Tag: google glass

  • Technology fades into the background

    At this year’s Google I/O developer conference, CEO Sundar Pichai spoke of how augmented reality (AR) glasses embedded with Google’s real-time translation services could break down the language barrier in face-to-face communication. While he did not explicitly announce any hardware, he showed a video of a pair of glasses with a heads-up display presenting the results of Google’s real-time translation technology as “subtitles for the world.”

    Google Translate + AR Glasses = Subtitles for the World

    Taking another run at the ill-fated Google Glass vision is a game-changer and speaks to the maturity and deep pockets of Google as a corporation. Taking the lessons learned from Google Glass 1.0, the company has improved the technology to a point where it’s less interruptive (and doesn’t make you look like a cyborg) and ready for more widespread adoption.

    We are tiptoeing into the post-computer world where “technology fades into the background,” pushing unnatural hardware interfaces and interruptive notifications out of the way of human-to-human interaction and realizing the true vision of AR – to augment the world around you.

    Combine this “subtitles for the world” mentality with another Google Lens enhancement, Scene Exploration, and now you have useful metadata from Google’s Knowledge Graph overlaid on the world around you. Check out the video below, which jumps to the demo of how Google envisions you using Scene Exploration to learn about the contents of items on the shelf at the grocery store.

    Google Scene Exploration demo at 7:00

    Exciting times! The caveat is that, as with all real-world technology, things will be rough in the beginning. I work at a Japanese company, and sometimes we turn on the real-time translation AI in Google Hangouts to see if we can get a decent translation of the meeting. Let me just say the results are not quite there yet. As Pichai said, there’s a lot of work to do.

    The competition has not stood still either. We also have Facebook’s Smart Glasses focused, as you would expect, on capture and sharing features, with a light that turns on to let the people around you know when you’re filming. Snapchat’s Spectacles (pictured below) overlay 3D filters on whatever you look at through their glasses, bringing the Snapchat Lens experience to the world around you – leave the psychedelics at home. The future is here; we just need to improve the software.

    Snapchat Spectacles 3

  • Google Glass and Time Travel

    A lot has been written about how Google Glass will be great for those who put on a pair. Immediate access to the world’s most powerful database, push alerts from your closest friends, a voice UI so you can look up directions without having to look down at your phone, a camera that lets you take a photo and share a moment, all without leaving that moment.

    While these are all powerful use cases that are bound to transform how we interact with the world around us, I’m more excited by the capability of Google Glass to annotate the physical world as we travel through it for those who come after us, especially those who can re-experience that world, as we saw it, in context. Imagine being able to take a photo of Notre Dame in Paris today, on a trip with your family, and save those photos with all the GPS data so that each photo has a place, on a map, in time. Add a community and you have a series of photos of a place, all taken from different perspectives. This, of course, is flickr’s world map – announced in 2004 under the tagline, “eyes of the world.”

    While a picture is worth 1,000 words, what if you could add more context? What if you could add more text to your photo? Tell a story that shares how this photo, in this place, was important to you? This is Findery.com, a place where people leave notes for each other in space and time. As described in their FAQ,

    Findery is made of notes. A note can be a story, advice, jokes, diatribes, information, memories, facts, advertisements, love letters, grocery lists and manifestos. The content of a note is only limited by your imagination. A note can be shared with the world, one to many people, one to one, or only with yourself.

    Findery and the Flickr Map are compelling map experiences, but imagine how powerful they could be if you could experience them in situ. The mobile versions of Flickr and Google+ get at this with a Nearby feature, which lets you browse photos taken close to your GPS location. I’ve used it a few times, but rarely is it compelling. Even if the photos are only a block away, they lose their connective tissue.
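    Under the hood, a Nearby feature is essentially a great-circle distance query over GPS-tagged photo records. Here’s a minimal sketch of how that might work, assuming hypothetical photo records (plain dicts with `lat`/`lon` keys) standing in for whatever a real photo service would return:

    ```python
    import math

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance in meters between two GPS coordinates."""
        r = 6_371_000  # mean Earth radius in meters
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def nearby_photos(photos, here, radius_m=200):
        """Return (distance, photo) pairs within radius_m, nearest first."""
        lat, lon = here
        hits = []
        for p in photos:
            d = haversine_m(lat, lon, p["lat"], p["lon"])
            if d <= radius_m:
                hits.append((d, p))
        return sorted(hits, key=lambda t: t[0])
    ```

    Standing in front of Notre Dame (roughly 48.8530, 2.3499), `nearby_photos(photos, (48.8530, 2.3499))` would surface only the shots taken within a couple hundred meters; a real service would of course use a spatial index rather than a linear scan.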

    While Google Glass is interesting as an information capture device, the possibility of a viewing device that can line up photos taken at the same place is what really excites me. Once you have a heads-up display connected to a vast library of GPS-tagged photos, you can enable clever overlays that show you not only the space around you but also that same space through time.

    Check out OldSF – a completely volunteer effort where two folks came together and took the time to put a collection of photos from the San Francisco Public Library onto a map so you can browse through them. One of the founders of OldSF blogged about the thrill of overlaying a photo from the past and fading to the present (and back again), letting you basically time travel in real life. It’s a genre called “Now and Then” photography, most recently cataloged on the site Dear Photograph.
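    That fade between past and present is, at its core, an alpha blend between an aligned historical photo and the live view. A minimal sketch, using plain lists of grayscale pixel rows as a hypothetical stand-in for real aligned image data (a real viewer would blend per RGB channel on registered images):

    ```python
    def crossfade(past, present, t):
        """Linearly blend two equally sized grayscale images (lists of rows).

        t = 0.0 shows the past photo, t = 1.0 the present one;
        sweeping t from 0 to 1 produces the then-and-now fade.
        """
        if len(past) != len(present):
            raise ValueError("photos must have the same dimensions")
        blended = []
        for row_a, row_b in zip(past, present):
            blended.append([round((1 - t) * a + t * b) for a, b in zip(row_a, row_b)])
        return blended
    ```

    Animating the effect is just a loop that steps `t` from 0 to 1 and back, redrawing the blended frame each step.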

    Imagine being able to pull up photos from your past, your father’s past, or your grandparents’ past. Ask Google Glass for directions to the nearest pinned memory, then bring it up in your glasses and see that moment, captured in time, while standing on the very spot where the photographer stood. Add voice annotation, capture some audio. It’s that moment that puts goosebumps on my arms. It’s that moment, reliving history, your personal history, that makes me excited to try out Google Glass someday.