Google Now: Bringing Us One Step Closer to the Star Trek Computer


Should I bring an umbrella to work today? Is there a good pizza place nearby? Is there a gas station on the way to Poughkeepsie, or should I fill up now? 

If you are a human, these questions are easy to understand, and often easy to answer. If you are a computer, though, you will likely have a hard time making sense of them: They're questions, not queries. Analyzing them requires contextual knowledge about the speaker and the thing the speaker is looking for. And their search terms are merely implied. 




Today, though, Google is taking one more step toward answering questions like these -- this one in the direction of search returns that are personalized, and predictive, and presented rather than asked for. Google Now, which debuted at last year's Google I/O conference, is now available on iPhones and iPads, expanding its presence across mobile devices. The feature, Google engineer Andrea Huey explains, is all about "giving you just the right information at just the right time. It can show you the day's weather as you get dressed in the morning, or alert you that there's heavy traffic between you and your butterfly-inducing date -- so you'd better leave now! It can also share news updates on a story you've been following, remind you to leave for the airport so you can make your flight and much more. There's no digging required: cards appear at the moment you need them most -- and the more you use Google Now, the more you get out of it."

This is much more than a feature update. The kind of intuitive relationship between user and data Huey is describing -- the conversational default -- is "kind of the next step for where search is going," Scott Huffman, Google's vice president of engineering, told me. Google's engineers, he says, have long been preoccupied with the idea of search not just as a box on a screen, but as a personal, and personalized, assistant. And "as we thought about it," he says, "a really great assistant brings you information before you ask for it." An assistant, ideally, knows your desires even better, and even sooner, than you do: Siri, but smarter. So Google has applied that longstanding workplace logic to the mobile devices that help you navigate the world. The basic idea of Google Now, Huffman says -- and the logic that is driving its extension across different mobile platforms -- is Google working "as a powerful assistant that wants to help me get through my day."

Which does not mean (um, yet?) a Google product that can read your mind, or even your voice. The capabilities here are still very much in their early stages. But the extension of Google Now, and the doubling down on its approach to search, suggests how Google sees itself within an environment that finds the firm's core competency -- searching the Internet -- competing with new ways of organizing the world's information. Ways that often treat information exactly as it is: discursive and dynamic and, ultimately, personal. "Google Now is probably the first example of a new generation of intelligent software," as Hugo Barra, director of product management for Android, put it. Will you hit traffic in your regular commute? Google would like to warn you. Is there a cool museum nearby? Google would like to tell you. Has your flight been delayed? Google would like to break the bad news.

With its investments in Google Now, Google is moving away from, or at least expanding, the interface that has driven search since its earliest days -- "keywords in a box," as Huffman puts it -- to something the firm hopes will be more sophisticated and intuitive and, for better or for worse, friction-free. This is, or it's trying to be, the Google that knows you. The Google that reads you. The Google that treats you, to some extent, as the site to be indexed. "Our goal," as Larry Page put it in a recent earnings call, "is to get you the right information, at just the right time." Google is betting that, armed with its deep knowledge of users and the world they live in, it will understand what "just the right information" and "just the right time" actually are -- almost as well as, and sometimes even better than, you do.

One key component of that bet is a related technology: voice control. More than half of the U.S. population now owns smartphones with voice capabilities, a Google rep pointed out to me, and -- per a survey the firm conducted -- two in three of them are aware of those capabilities. Already, Google's voice commands allow users to do things like set timers, send texts, dictate notes, and, of course, search for stuff on the Internet. But there's a broader market to be tapped here, Google believes. "Voice commands are going to be increasingly important," Page declared during the same earnings call, noting the obvious ("it's just much less hassle to talk than type"). 

After all, one of the most crucial skills of a good assistant is communication: He or she has to be able to listen and reply to you effectively for anything else to make much difference. Haptic commands, fingers on a keyboard or screen, can be a clunky way to have a conversation, Huffman points out, "whereas voice is much more natural." That's not merely a matter of convenience. Voice lends itself to a kind of dialogue -- to an interaction with a device that seems to take place on relatively human terms -- much more readily than fingers do. Siri may be far from perfect, but it (she?) is onto something big in that respect. Voice, Huffman says, is "just a much more powerful way to interact with a mobile device."

That kind of interaction, for the most part, has only recently been a possibility. While, sure, we've had incremental steps along the way -- Dragon dictation software, Siri and her predecessors -- engineers have only recently had the tools to convert "voice" into "interface." "It's really the first time in history," Huffman says, that the necessary technological elements have come together to allow a person to talk to a computer -- and, by extension, to talk to the stuff that the computer implicitly contains. First, you needed voice recognition capability that could keep up with the idiosyncrasies and the speed of typical speech. Then, you needed the intelligence aspect -- natural language processing and understanding -- that could convert sounds into words into meaning. Finally, you needed a knowledge graph: the interconnected and structured data that could offer people the answers they were looking for.
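To make that three-stage dependency concrete, here is a minimal, purely illustrative sketch in Python of how such a pipeline composes -- waveform to words, words to meaning, meaning to an answer drawn from structured data. Every function and value here is a hypothetical stand-in, not Google's actual components:

```python
def recognize_speech(audio: bytes) -> str:
    # Stage 1: voice recognition -- turn the waveform into words, keeping up
    # with the speed and idiosyncrasies of natural speech. (Stubbed here.)
    return "is there a gas station on the way to Poughkeepsie"

def understand(text: str) -> dict:
    # Stage 2: natural language understanding -- sounds into words into
    # meaning. A real system would parse intent; this stub returns one.
    return {"intent": "find_along_route", "what": "gas station",
            "destination": "Poughkeepsie"}

# Stage 3 stand-in: a sliver of structured, interconnected data.
# The station names are invented placeholders.
ANSWERS = {
    ("gas station", "Poughkeepsie"): ["Station A (Route 9)",
                                      "Station B (US-44)"],
}

def answer(query: dict) -> list:
    # Look the parsed question up against the structured data.
    return ANSWERS.get((query["what"], query["destination"]), [])

print(answer(understand(recognize_speech(b"<raw audio>"))))
# -> ['Station A (Route 9)', 'Station B (US-44)']
```

The point of the toy isn't the stubs; it's that each stage is useless without the one before it, which is why all three had to mature before "talking to a computer" became an interface at all.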

In 2010, Google acquired Metaweb, a firm that maintained "an open database of things in the world." Freebase, at the time, offered information about more than 12 million of those things, including "movies, books, TV shows, celebrities, locations, companies and more." Under Google, it now offers much more. And the database has provided Google, in turn, with interconnected and structured data that, along with Wikipedia and other sources, now informs Google's Knowledge Graph -- a way for Google to apply the logic of the semantic web to everyday consumer products. It's a way of knowing things, even if you don't know the right questions to ask. It's a way, as Metaweb's founder told Alexis last year, of "going sideways through the web."
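A toy example of what "going sideways" might mean: start at one entity and hop across typed links to related entities, rather than matching keywords. The graph below is invented for illustration and has nothing to do with Freebase's actual schema:

```python
# Toy knowledge graph: entities linked by typed properties, so you can move
# "sideways" from one thing to related things without a keyword search.
GRAPH = {
    "Poughkeepsie": {"type": "city", "located_in": "New York",
                     "notable_for": ["Walkway Over the Hudson"]},
    "New York": {"type": "state", "contains": ["Poughkeepsie"]},
    "Walkway Over the Hudson": {"type": "landmark",
                                "located_in": "Poughkeepsie"},
}

def sideways(entity: str, hops: int = 2) -> set:
    """Collect every entity reachable from `entity` by following links."""
    seen, frontier = {entity}, [entity]
    for _ in range(hops):
        next_frontier = []
        for node in frontier:
            for value in GRAPH.get(node, {}).values():
                links = value if isinstance(value, list) else [value]
                for linked in links:
                    if linked in GRAPH and linked not in seen:
                        seen.add(linked)
                        next_frontier.append(linked)
        frontier = next_frontier
    return seen - {entity}

print(sideways("Poughkeepsie"))
# -> {'New York', 'Walkway Over the Hudson'}
```

The traversal never consults the question's wording; it consults the structure. That is the difference between a web you search and a web you can move through.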

So then. Combine that resource with voice recognition and natural language processing, and you have a powerful new way to interact with knowledge and the people who make use of it. And then combine all that with new forms of mobile hardware -- voice-controlled Google Glass, for example -- and you have a whole interface, a whole new way to make sense of the world as you navigate it. Google Now, as Alexis put it when the service debuted, "is the base layer of the Glass video, or of any Google AR future. It's the servant that trains itself. It's the bot that keeps you from having to use your big clumsy thumbs."

So the search box on a page -- that elegant little object that made Google what it is today -- will soon be, Google is betting, surpassed by something more intuitive and human-friendly. It'll be a shift from search box to voice box, from query to conversation. The layer of information that has hummed in the background of our computer screens and our smartphones and our lives, Google believes, will soon be brought to the fore, delivered by a friendly personal assistant who is smart and always hard-working and who knows you almost as well as you know yourself. And whose name, in this case, is Google.