The World Is Not Enough: Google and the Future of Augmented Reality

The new Google Field Trip app probes the question: What digital information do you want to see overlaid on the physical world?


A book in The Future.

It is The Future. You wake up at dawn and fumble on the bedstand for your (Google) Glass. Peering out at the world through transparent screens, what do you see?

If you pick up a book, do you see a biography of its author, an analysis of the chemical composition of its paper, or the share price for its publisher? Do you see a list of your friends who've read it or a selection of its best passages or a map of its locations or its resale price or nothing? The problem for Google's brains, as it is for all brains, is choosing where to focus attention and computational power. As a Google-structured augmented reality comes closer to becoming a product-service combination you can buy, the particulars of how it will actually merge the offline and online are starting to matter.

To me, the hardware (transparent screens, cameras, batteries, etc.) and software (machine vision, language recognition) are starting to look like the difficult but predictable parts. The wild card is going to be the content. No one publishes a city; they publish a magazine or a book or a news site. If we've thought about our readers reading, we've imagined them at the breakfast table or curled up on the couch (always curled up! always on the couch!) or in office cubicles running out the clock. No one knows how to create words and pictures that are meant to be consumed out there in the world.

This is not a small problem.

* * *

I'm sitting with Google's former maps chief John Hanke in the company's San Francisco offices looking out at the Bay's islands and bridges, which feel close enough to touch. We're talking about Field Trip, the new Android app his 'internal startup' built, when he says something that I realize will be a major theme of my life for the next five or 10 years. Yours too, probably.

But first, let me explain what Field Trip is. Field Trip is a geo-publishing tool that gently pushes information to you that its algorithms think you might be interested in. In the ideal use case, it works like this: I go down to the Apple Store on Fourth Street in Berkeley and as I get back to my car, I hear a ding. Looking at my phone, I see an entry from Atlas Obscura, which informs me that the East Bay Vivarium -- a reptilian wonderland that's part store, part zoo -- is a couple of blocks away. That sounds neat, so I walk over and stare at pythons and horned dragons for the next hour. Voila. "Seamless discovery," as Hanke calls it.

Dozens of publishers are now tagging their posts with geocodes that Field Trip can hoover up and push to users. Hanke's team works on finding the right moment to insert that digital information into your physical situation.
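
To make the mechanics a little more concrete, here is a minimal sketch of how a proximity-triggered geo-feed could work. The post format, field names, and the 200-meter trigger radius are my own assumptions for illustration, not Field Trip's actual design.

```python
# A hypothetical sketch of proximity-triggered geo-publishing. The post format,
# field names, and the 200-meter trigger radius are assumptions, not Field Trip's design.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class GeoPost:
    title: str    # e.g. "The East Bay Vivarium"
    source: str   # e.g. "Atlas Obscura"
    lat: float
    lon: float

def distance_m(lat1, lon1, lat2, lon2):
    """Haversine distance between two points, in meters."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

def posts_to_push(user_lat, user_lon, posts, already_seen, radius_m=200):
    """Return nearby posts the user hasn't been shown yet -- the 'ding' moment."""
    return [p for p in posts
            if p.title not in already_seen
            and distance_m(user_lat, user_lon, p.lat, p.lon) <= radius_m]
```

The hard part, as Hanke suggests, isn't the geometry; it's deciding which posts deserve the ding at all.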

And when it works well, damn does it work well.

It's only a slight exaggeration to say that Field Trip is invigorating. That is to say: It makes life more interesting. And since I switched back to my iPhone after a one-week Android/Field Trip test, it's the one thing that I really miss.

At first, I was tempted to write off this effort as a gimmick, to say that Field Trip was a deconstructed guide book. But the app is Google's probe into the soft side of augmented reality. What the team behind it creates and discovers may become the basis of your daily reality in five or 10 years. And that brings me back to Hanke's comment, the one you could devote a career to.

"You've got things like Google Glass coming. And one of the things with Field Trip was, if you had [Google Glass], what would it be good for?" Hanke said. "Part of the inspiration behind Field Trip was that we'd like to have that Terminator or Iron Man-style annotation in front of you, but what would you annotate?"

There's so much lurking in that word, "annotate." In essence, Hanke is saying: What parts of the digital world do you want to see appear in the physical world?

If a Field Trip notification popped up about John Hanke, it might tell you to look for the East Bay hipster with floppy hair almost falling over his eyes. He looks like a start-up guy, and admits to being one despite his eight years at Google. He refers to its cofounders like old college friends. ("Sergey was always big on, 'You should be able to blow stuff up' [in Google Earth].") Not a kid anymore, Hanke sold an early massively multiplayer online gaming company to the legendary Trip Hawkins in the '90s, then co-founded Keyhole, which became the seed from which Google's multi-thousand person map division grew.

When maps got too big for Hanke's taste, he "ultimately talked with Larry" [Page], and figured out how to create an "autonomous unit" to play with the company's geodata to create novel, native mobile experiences. This is Google's Page-blessed skunkworks for working on this very specific problem. They are Google but they have license to be unGoogle.

"You don't want to show everything from Google Maps. You don't want to show every dry cleaner and 7-Eleven in a floating bubble," Hanke said. "I want to show that incremental information that you don't know. What would a really knowledgeable neighborhood friend tell you about the neighborhood you're moving through? He wouldn't say, 'That's a 7-Eleven. That's a fire hydrant.' He would say, 'Michael Mina is opening this new place here and they are going to do this crazy barbecue thing.' "

Some companies are working on augmented-reality apps, like Junaio, that crowdsource location intelligence through Facebook Places and Foursquare check-ins. Hold up your phone to the world and it can tell you where your friends have been. It's a cool app, and certainly worth trying out, but there isn't much value in each piece of information that you see. The information density of that augmented reality is low, even if it is socially relevant. If you're opting into 24/7 AR through something like Glass, that cannot be the model.

* * *

Google had previously offered up a vision of how Glass might be used in a video they released earlier this year to pretty much universal interest.

But consider the cramped view of augmented reality that video presents. What information is actually overlaid on the world?

  • The weather

  • The time

  • An appointment

  • A text message

  • Directions

  • Interior directions (within a bookstore? Right.)

  • A location check on a friend

  • A check-in

You can see why Google would put this particular vision out there. It's basically all the stuff they've already done repackaged into this new UI. Sure, there's a believable(ish) voice interface and a cute narrative and all that. But of all the information that could possibly be seamlessly transmitted to you from/about your environment, that's all we get?

I'm willing to bet that people are going to demand a lot more from their augmented reality systems, and Hanke's team is a sign that Google might think so, too. His internal startup at Google is called Niantic Labs, and if you get that reference, you are a very particular kind of San Francisco nerd. The Niantic was a ship that came to California in 1849, got converted into a store, burned in a fire, and was buried in the city. Over the next hundred and twenty-five years, the ship kept getting rediscovered as buildings were built and rebuilt at its burial site. Artifacts from the ship now sit in museums, but a piece of the bow remains under a parking lot near the intersection of Clay and Sansome in downtown San Francisco.

Now, not everyone is going to want to know the story of the Niantic, at least not as many people as want to know about the weather. And the number of people who care about a story like that -- or one about a new restaurant -- will be strongly influenced by the telling. The content determines how engaging Field Trip is. But content is a game that Google, very explicitly, does not like to play. Not even when the future prospects of its augmented-reality business may be at stake.

The truth is, most of the alerts that Field Trip sent me weren't right for the moment. I'd get a Thrillist story that felt way too boosterish outside its email-list context. Or I'd get a historical marker from an Arcadia Publishing book that would have been interesting, but wasn't designed to be consumed on my phone. They often felt stilted, or not nearly as interesting as you'd expect (especially for a history nerd like me). You can hand-tune the sorts of publications you receive, but of the updates I got, only Atlas Obscura (and Curbed and Eater, to a lesser extent) seemed designed for this kind of consumption. No one else seemed to want to explain what might be interesting about a given block to someone walking through it; that's just not anyone's business. And yet stuff that you read on a computer screen at home has got to be different from stuff that you read in situ.

What happens when the main distribution medium for your work is that it's pushed to people as they stumble through the Mission or around Carroll Gardens? What possibilities does that open up? What others does it foreclose?

"Most of the people that are publishing now into Field Trip are publishing it as a secondary feed," Hanke told me. "But some folks like Atlas Obscura. They are not a daily site that you go to. They are information on a map. They are an ideal publishing partner."

They are information on a map. That's not how most people think of their publications. What a terrifying vision for those who grew up with various media bundles or as web writers. But it's thrilling, too. You could build a publication with a heatmap of a city, working out from the most heavily traveled blocks to the ones where people rarely stroll.

Imagine you've got a real-time, spatial distribution platform. Imagine everyone reading about the place you're writing about is standing right in front of it. All that talk about search engine and social optimization? We're talking geo-optimization, each story banking on the shared experience of bodies co-located in space.
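
As a thought experiment of my own, not anything Google or a publisher has described, "geo-optimization" might boil down to scoring each story by how busy its block is and how close the reader is standing to it:

```python
# A back-of-the-envelope sketch of "geo-optimization": weight each geotagged story
# by how heavily trafficked its block is and how close the reader is standing to it.
# The scoring formula and inputs are my own illustration, not any real system.
def geo_score(story, reader_block, foot_traffic, blocks_away):
    """Score one story for one reader.

    story        -- dict with a 'block' key naming the city block it covers
    foot_traffic -- dict of block id -> average daily pedestrians (the 'heatmap')
    blocks_away  -- function giving walking distance, in blocks, between two blocks
    """
    audience = foot_traffic.get(story["block"], 0)
    proximity = 1.0 / (1 + blocks_away(reader_block, story["block"]))
    return audience * proximity

def rank_for_reader(stories, reader_block, foot_traffic, blocks_away):
    """Order the editorial queue so the busiest, nearest blocks come first."""
    return sorted(stories,
                  key=lambda s: geo_score(s, reader_block, foot_traffic, blocks_away),
                  reverse=True)
```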


* * *

What role will Google play in all this? Enabler and distributor, at least according to their current thinking. And on that score, there are a few kinks to work out.

First, in an augmented-reality world, you need really good sound mixers. Too often, Field Trip would chime in and my music would cut out as I walked through the streets. This is a tough request, but I want to be informed without being interrupted. It's a small thing, but the kind of thing that makes you bestow that most hallowed of compliments: "It just works." Take this also as an example of how important all kinds of "mixes" are going to be. An augmented-reality annotation will be a live event for the reader; the production has to be right.

Second, there is a reason that the Google Glass concept video was parodied with a remix that put ads all over the place.

No one believes that Google will make products that don't create a revenue stream. Hanke's got at least a partial answer to this one. Certain cool services, like Vayable, a kind of Airbnb for travel experiences, will be built into the app. And that's good, even if I do expect that a real Google Glass would look as much like the ad-covered parody as like Google's original vision.

Even if you don't mind the ads, Google would have to master the process of showing them to you. That's something that Niantic is putting a lot of thought into. Hanke frames it as a search for the right way to do and show things automatically on the phone. We're used to punching at our screens, but in this hypothetical future, you'd really need a little more help.

"This seamless discovery process, doing things automatically on the phone," he said. "I think it's a whole new frontier in terms of user interface. What's the right model there? How do you talk to your user unprompted?"

Hanke's not the only person at Google thinking about these things, even if he is one of the most interesting. Google Now, the personal-assistant app unveiled in June, is traveling over some of the same territory. Its challenge is to automatically show you the pedestrian things you might want to know.

"Google Now is probably the first example of a new generation of intelligent software," Hugo Barra, director of product management for Android, told me. "I think there will be a lot more products that are similarly intelligent and not as demand-based."

So, if you search for a flight, it'll stick a little card in your Google Now that tracks when the flight is due to leave. It'll show you sports scores based on the city you're in. It can tell you when you need to leave for appointments based on current traffic conditions. And it can tell you the weather. "We've unified all these backends. Things you've done in [search] history, the place where you are, the time of the day, your calendar. And in the future, more things, more signals, the people you're with. Google can now offer you information before you ask for it," Barra continued. "It's something the founders have wanted to do for a long time."
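
Reading between the lines of Barra's description, the system sounds like a fusion of independent signals. Here is a toy sketch of how such a card-picker might weigh them; the signal names, weights, and threshold are my assumptions for illustration, not Google Now's actual design.

```python
# A toy illustration of fusing signals to decide which "cards" to surface,
# in the spirit of Barra's description. The signal names, weights, and threshold
# are assumptions for illustration -- not Google Now's actual design.

def score_card(card, context):
    """Score a candidate card against the user's current context."""
    score = 0.0
    if card.get("topic") in context["recent_searches"]:
        score += 2.0   # search history: you looked up this flight
    if card.get("place") == context["city"]:
        score += 1.0   # location: local sports scores, local weather
    if card.get("event") in context["calendar_events"]:
        score += 3.0   # calendar: time to leave for an appointment
    if card.get("hour") == context["hour"]:
        score += 0.5   # time of day: morning commute traffic, and so on
    return score

def cards_to_show(candidates, context, threshold=1.0):
    """Surface only the cards that clear a relevance threshold, best first."""
    relevant = [c for c in candidates if score_card(c, context) >= threshold]
    return sorted(relevant, key=lambda c: score_card(c, context), reverse=True)
```

Even a crude scorer like this captures the shift Barra is describing: the software decides what's relevant before you ask for it.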

Google Now, in other words, is the base layer of the Glass video, or of any Google AR future. It's the servant that trains itself. It's the bot that keeps you from having to use your big clumsy thumbs.

In my week of testing, I liked Google Now, but I didn't love it. Very few "automagic" things happened, even after a week of very heavy use. I rarely felt as if it was saving me all that much time. (Anyone have this experience with Siri, too? I sure have.) And while the traffic alerts tied to my calendar were legitimately awesome, if Google Now's info was all that was embedded in my heads-up display, I'd be seriously disappointed.

Again, as with Field Trip (not to mention Junaio), the problem is content. Google's great with structured data -- flight times, baseball box scores -- but it's not good with the soft, squishy, wordy stuff.

* * *

Perhaps a writer's task has always been to translate what is most interesting about the world into a format that people can understand and process quickly. We perform a kind of aggregation and compression, zipping up whole industries' fortunes in a few short sentences. But if an augmented-reality future comes to pass, and I think it will in one form or another, this task will really be laid bare. Given a city block, the challenge will be to excavate and present the information that the greatest number of people are curious about at the precise moment they walk through it. Information on a map, at a time.

Some of what legacy (print and digital) media organizations produce might work nicely: Newspapers publish (small amounts of) local news about a city. Patch could provide some relevant updates about local events. Some city weeklies (OC Weekly!) do a fantastic job covering shows and scenes and openings (while muckraking along the way). But everyone is still fundamentally writing for an audience they expect to be at their computers or curled up on the couch. The core enterprise is not to create a database of geo- and time-tagged pieces of text designed to complement a walk (or drive) through a place.


What you need are awesome "digital notes" out there in the physical world. That's what Caterina Fake's Findery (née Pinwheel) is trying to create. People can leave geocoded posts wherever they want, and other people can then discover them. It, like Junaio, is very cool. But the posts lack the kind of polish that would make me voluntarily opt into having them pushed to my AR screen. I wouldn't want to have to sift through them to find the good stuff.

To me, the extremely attention-limited environment of augmented reality calls for a new kind of media. You probably need a new noun to describe the writing. Newspapers have stories. Blogs have posts. Facebook has updates. And AR apps have X. You need people who train at, get better at, and have the time to create perfect digital annotations in the physical world.

Fascinatingly, such a scenario would require the kind of local knowledge newspaper reporters used to accumulate, and pair it with the unerring sense of raw interestingness that the best short-form magazine writers, bloggers, tweeters, and Finderyers cultivate.

Back to the future.




