There's a concept for phones at the moment that its developers say will allow you to see areas not directly visible to you. The article is written in a slightly feverish, puerile tone to make you think of peering through the walls of the ladies' room. Big deal, really. They're missing the two most important developments here.
One (which I covered in brief in this previous post): a concept much like the above already exists, which would provide "balloon help" in the form of social tags people had placed, using their mobile phones, at specific GPS locations. I said I'd pushed for the development of such a system, with added features, four or five years ago at a particularly unimaginative and (as it turns out) loser company. More on this in a moment.
Two (which my previous post also sort of covered): the use of images, not just text tags.
Please note that from here on, I'm doing it again - I'm revealing an idea that, had said previous employer taken the development on, would have given them a three-year head start on the competition, and which, if a cellphone or software development company were to take it on now, would put them a year, maybe three, ahead of their competitors even today. It's an edge.
And the idea that I had, and which is still applicable, is leveraging crowds, as all the best apps do, to provide rich content (which is what sells your application to consumers) for minimal cost (which is what sells the application to your accountants)...
Take a device that's capable of geolocation, such as a GPS-enabled phone or PDA. Now make sure it has a camera on it as well. That's all the end user needs.
Now take server-side software that can accept photos from those devices, tagged with GPS information, and pass them through a photo-stitching process that uses salient features in the images to decide where to stitch, then uses other GPS-tagged photos (taken from slightly different locations) to work out the orientation of the stitch, and of each photo that composes it.
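To make that concrete, here's a minimal sketch of the feature-matching step, assuming the server has OpenCV available and receives plain image files alongside their GPS fixes. The function name and parameters are mine, invented for illustration; this is the principle, not a production pipeline.

import cv2

def match_features(img_a, img_b, keep=50):
    """Find salient-feature correspondences between two overlapping
    photos; these tell the stitcher where the seam should go.
    Inputs are grayscale images, e.g. cv2.imread(path, cv2.IMREAD_GRAYSCALE)."""
    orb = cv2.ORB_create(nfeatures=2000)            # fast, patent-free keypoint detector
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return []                                   # one image had no usable texture
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
    # Return (x, y) point pairs for the strongest correspondences.
    return [(kp_a[m.queryIdx].pt, kp_b[m.trainIdx].pt) for m in matches[:keep]]

Once you have those point pairs, standard stitching machinery takes over; the crowd supplies the photos, the server does the geometry.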
You see the point? I can now take a photo, the server compares it to existing pictures at my location as given by my GPS, and from that it figures out what direction I was facing, and what azimuth and elevation the camera was at. No need for accelerometers and compasses.
In fact, the server can probably tell me to within three degrees what direction my camera faced. More than that, it can now offer me a range of options to do with my location. For instance, it can provide me with images that look like the walls are transparent, i.e. I can see pictures other people have previously taken of the unseen parts. It means I am so much better prepared when I go through a door (for instance) wheeling a trolley full of documents, and won't snag on the photocopier just inside the door and to the left.
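The geometric anchor for that heading estimate is nothing exotic: the forward azimuth between two GPS fixes. If two overlapping photos were taken from those fixes, the features they share lie roughly along that bearing, which pins down the heading without a compass. A toy version (the function name is mine; the formula is the standard great-circle initial bearing):

import math

def initial_bearing(lat1, lon1, lat2, lon2):
    """Compass bearing, in degrees, from GPS fix 1 to GPS fix 2."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return (math.degrees(math.atan2(x, y)) + 360) % 360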
I also want to be able to view adverts for businesses in the building I just photographed, reviews of them (that's the social tagging part of the application), and see a gallery of pictures. Did you just spot that? "I also want to be able to view adverts for businesses in the building": the company building this gets to sell location-based targeted advertising - and users are going to want it! Because they want to know about that shop they're standing outside of.
Why are the users going to provide the content? Well, to be informed of a place or feature, they will need to take a picture of it. That picture, if it's better than the others or fills in a missing detail, becomes content. And because we all have opinions and like to make them known, tags and reviews will flow in too. A good piece of software will pick the most common themes out of the reviews and synthesise them into the popular voice, as well as providing anyone who wants them with all the original reviews and tags. At data prices, but there you go. Take as much or as little as you feel you need.
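How crude can that "popular voice" step be and still work? Pretty crude. Here's a deliberately naive sketch - plain word frequency over the crowd's reviews. A real system would use proper text analysis, and the stopword list here is just a stand-in:

from collections import Counter
import re

STOPWORDS = {"the", "a", "an", "and", "is", "it", "was", "i", "to", "of", "but"}

def popular_voice(reviews, top_n=5):
    """Surface the words the crowd uses most across all reviews."""
    words = []
    for review in reviews:
        words += [w for w in re.findall(r"[a-z']+", review.lower())
                  if w not in STOPWORDS]
    return [word for word, _ in Counter(words).most_common(top_n)]

# popular_voice(["Great noodles, slow service", "Slow but great noodles"])
# -> ['great', 'noodles', 'slow', 'service']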
The utility doesn't end there, though. Suppose I wanted to go to Noddy's Noodle Nirvana in my home city of Nedlington, Nebraska. No problem. I search for Noddy's in the database using my phone, and it gets preloaded into the phone's GPS/mapping function. I can see pictures of the place and its surroundings, and find places to park nearby. In effect, I've found my way to Noddy's before even leaving, scouted the layout, planned my parking or bus route, and I'm now on my way.
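Under the hood, that lookup needs little more than a name index and a distance function. A sketch, with made-up names and coordinates - the standard haversine formula is the only real ingredient:

import math

PLACES = {"Noddy's Noodle Nirvana": (41.30, -98.10)}  # illustrative coordinates

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes, in kilometres."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlon / 2) ** 2)
    return 6371 * 2 * math.asin(math.sqrt(a))

def nearest_match(query, my_lat, my_lon):
    """Pick the closest place whose name contains the query string."""
    hits = [(haversine_km(my_lat, my_lon, lat, lon), name)
            for name, (lat, lon) in PLACES.items()
            if query.lower() in name.lower()]
    return min(hits)[1] if hits else None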
I arrive there, and Noddy's nutty noodle chef has set the place ablaze. I'm trapped inside. But the phone knows I'm there, and the fire alarm has triggered a message to my phone showing me where the exits are. I make it outside and see the Fire Chief using his mobile device to call up a plan of the building: where the lifts and exits are, the hydrants, the structural details, the members of his team. For anyone with a similarly equipped phone, the Chief knows exactly where they are at any given moment, until they leave the mission zone. Am I out in the muster area? Trapped in a stairwell? 200 yards away and out of the picture? My phone will have let the emergency teams know. Need to access surveillance cameras and see what's happening? Sure, the system knows where each camera is, and if you have the clearance, you get to use them.
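The "until they leave the mission zone" test is the simplest piece of the whole system. A sketch, with an assumed 200-metre radius and invented names - at street scale a flat-earth approximation is plenty:

import math

def in_mission_zone(phone, incident, radius_m=200.0):
    """True while a phone's GPS fix (lat, lon) remains within
    radius_m of the incident. Equirectangular approximation,
    which is accurate to well under a metre at this scale."""
    lat1, lon1 = map(math.radians, phone)
    lat2, lon2 = map(math.radians, incident)
    x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2)
    y = lat2 - lat1
    return 6371000 * math.hypot(x, y) <= radius_m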
Is that not what a Killer App is made of? Useful, money-making, life-saving, simple to implement? If you work for a cellphone or software developer and this idea just grabs you, feel free to use my Paypal link and make a huge donation... %)