
Before Google Glass

Written by Andy Hilliard | Jan 24, 2013

When Google revealed plans earlier this year for Internet-capable technology worn like eyeglasses, there was a palpable buzz in the tech world.

Google Glass has captured the imagination of many as the future of innovation. So far it has been described as a device worn on the face, with a display that responds to the wearer’s head movements.

It is an interactive Internet gateway that could revolutionize how we interact with each other and the world. But is it that simple?

Think about the way 3D TV was heralded not so long ago. It is safe to say that most people have not caught on to it yet, and maybe they never will. Earlier this year I bought a 3D-enabled TV, and for a while my family and I watched 3D movies constantly, but it has probably been at least six months since we bothered to put the glasses on and watch anything in 3D.

It will take more than just the release of Google Glass to spur a real revolution in its use. People have to feel it offers more value than the same capabilities they already get on their smartphones or computers. Having the hardware is only the first step; it will also take a combination of user-interface skill and cloud computing to supply the data and the logic for delivering it.

App development for this new technology will need to take advantage of cues and inputs that you can’t use with a phone in your hands. You can shake a phone, and the software can detect that action and interpret it one way or another. Apps on Google Glass will need a reliable way to detect movement, and it’s questionable whether eye movement alone can be detected precisely enough. This is where user-interface skill comes into play: displaying information based on how the wearer of the glasses is moving, while pushing the heavier computation out to the cloud rather than running it on the glasses themselves.
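To make that idea concrete, here is a minimal sketch of how such an app might work. Since no developer kit for Glass has been published, the class names, thresholds, and cloud call below are purely illustrative assumptions, not a real Glass API: the device classifies raw head-motion samples locally, then hands only the interpreted intent off to a cloud service for the heavier lookup.

```java
// Hypothetical sketch only: classify simple head gestures from raw
// rotation-rate samples on the device, then send the interpreted intent
// to a (stubbed) cloud service. All names and thresholds are assumptions.
import java.util.Arrays;
import java.util.List;

public class HeadGestureSketch {

    enum Gesture { NOD, SHAKE, NONE }

    /** One gyroscope reading: rotation rate (rad/s) around the pitch and yaw axes. */
    static final class Sample {
        final double pitchRate;
        final double yawRate;
        Sample(double pitchRate, double yawRate) {
            this.pitchRate = pitchRate;
            this.yawRate = yawRate;
        }
    }

    /** Very rough classifier: sustained pitch motion = nod, sustained yaw motion = shake. */
    static Gesture classify(List<Sample> window) {
        double pitch = 0, yaw = 0;
        for (Sample s : window) {
            pitch += Math.abs(s.pitchRate);
            yaw += Math.abs(s.yawRate);
        }
        double threshold = 0.5 * window.size(); // assumed sensitivity per sample
        if (pitch > threshold && pitch > yaw) return Gesture.NOD;
        if (yaw > threshold && yaw > pitch) return Gesture.SHAKE;
        return Gesture.NONE;
    }

    /**
     * Placeholder for the cloud round trip: the glasses send only the intent,
     * and the answer is computed server-side and streamed back to the display.
     */
    static String queryCloud(Gesture gesture) {
        if (gesture == Gesture.NOD) return "cloud: confirm selection, fetch next card";
        if (gesture == Gesture.SHAKE) return "cloud: dismiss card, fetch alternatives";
        return "cloud: no action";
    }

    public static void main(String[] args) {
        // A short window of fake sensor samples that looks like a nod.
        List<Sample> nodLike = Arrays.asList(
                new Sample(0.9, 0.1), new Sample(1.1, 0.0), new Sample(0.8, 0.2));
        Gesture g = classify(nodLike);
        System.out.println("detected: " + g + " -> " + queryCloud(g));
    }
}
```

The point of the split is the one made above: the glasses do just enough work to recognize what the wearer is doing, and everything data-heavy lives in the cloud.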

The concept behind this technology is not new. We saw it in the movie “Minority Report,” where Tom Cruise’s character uses a gesture interface to interact with a computer. In Neal Stephenson’s novel “Snow Crash,” the main character, Hiro Protagonist, explores the Metaverse, where gestures and movements help him navigate through information to find the answers to his questions.

Creating this kind of technology will demand really good programming talent. The question remains: who is going to develop the next great app that takes that leap?