When Will Smartglasses and Other Wearable Computers Hit the Mainstream?

Google has stoked our collective imagination via relentless promotion of its Google Glass wearable computer in recent months. Thanks to a campaign of Web videos, trade show appearances and blog posts, the search giant has positioned its smartglasses as a hands-free augmented reality gadget that will allow us to share our personal experiences in real time, whether we are skydiving, skiing or handling snakes.


Not surprisingly, a number of competitors have emerged, promising many of the same capabilities. Before augmented-reality eyewear can move into the mainstream, however, Google and its ilk must address some fundamental shortcomings in smartglass technology, including bulky designs, high prices and a dearth of software that would enable them to be more than head-mounted camera phones.


Google should aim for a headset with see-through lenses that allows the wearer to look straight ahead rather than at a small screen off to the side. So says Justin Rattner, a man who, along with his colleagues at chipmaker Intel, spends a lot of time engineering the future of computing technology. Rattner, an Intel Senior Fellow, serves as director of Intel Labs and as the company’s chief technology officer.


Scientific American spoke with him about why smartglasses are getting so much attention, how they should be improved and why Star Trek’s tricorders were a misguided interpretation of the future.


[An edited transcript of the interview follows.]



Wearable computers have been around for decades, including ring scanners that factory workers wear on their fingers to read barcodes and even prototype head-mounted units that enable augmented reality. Why is the technology getting so much attention now?

The sensor technology, the communications technology and the computer technology have all reached a point where, in some sense, the potential for high-volume consumer wearables is real for the first time. That’s what’s new. Today you can put essentially everything that’s in a smartphone into a set of eyeglasses, although they would be a bit heavy. That potentially becomes an interesting platform for communications.


Why do you say “potentially”?

We think there is a grand challenge when it comes to eyewear. No one’s been able to demonstrate a high-performance see-through display. This side-view display that you see in Google Glass and in the Oakley Airwave snow goggles is, in some sense, a recognition of the fact that no one has solved the transparent display problem, even though there are any number of people working on it.


[Such a high-performance see-through display would have] an optical engine for the left and right eyes that would project images into the lenses. The display would be constructed in such a way that it keeps the virtual images in front of you, regardless of where you turn your head or your gaze. It would be a perfect overlay, and you would see right through the projected images. So if you’re doing augmented reality and walking in New York, you will see “Empire State Building” or “Statue of Liberty” hovering over those physical objects.
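
[To make the idea of a world-locked overlay concrete, here is a minimal sketch of the underlying geometry: a landmark’s position is re-projected into the display every frame from the wearer’s head pose, so the label stays pinned to the physical object as the head turns. The coordinates, focal length and yaw-only rotation below are illustrative assumptions, not anything Intel or Google has published.]

```python
import numpy as np

# Minimal sketch of a world-locked AR label (all values hypothetical): project a
# landmark's world position into the headset display so the label stays over the
# physical object as the wearer's head rotates.

def yaw_matrix(yaw_rad):
    """Rotation of the head about the vertical (y) axis."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    return np.array([[  c, 0.0,   s],
                     [0.0, 1.0, 0.0],
                     [ -s, 0.0,   c]])

def project_label(landmark_world, head_pos, head_yaw_rad,
                  focal_px=800, width=1280, height=720):
    """Return the pixel where the label should be drawn, or None if it is behind the wearer."""
    # Transform the landmark into head (camera) coordinates.
    p = yaw_matrix(-head_yaw_rad) @ (landmark_world - head_pos)
    if p[2] <= 0:                     # behind the viewer, nothing to draw
        return None
    u = width  / 2 + focal_px * p[0] / p[2]
    v = height / 2 - focal_px * p[1] / p[2]
    return u, v

# Example: the label's screen position shifts to compensate as the head turns
# 10 degrees, so it keeps hovering over the same (made-up) building.
building = np.array([30.0, 40.0, 200.0])   # metres, arbitrary example
head = np.array([0.0, 1.7, 0.0])
print(project_label(building, head, head_yaw_rad=0.0))
print(project_label(building, head, head_yaw_rad=np.radians(10)))
```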


There’ve been a handful of companies, such as Lumus, that have had these kinds of technologies for a long time, but typically they’ve either been strictly in development or sold for research in augmented reality. Generally speaking, see-through displays have been too bulky, too heavy and too dim to bring to market. They’re fine indoors, but you walk outside and the virtual image is completely washed out.


Lumus’s see-through, wearable displays resemble a larger-than-normal pair of sunglasses and project augmented-reality images onto the sides of the lenses. As the images travel to the center of the lenses, they are reflected into the eyes, giving the wearer the impression of seeing the images on a large screen in front of them. Why can’t these see-through augmented-reality glasses be made more like a regular pair of eyeglasses or sunglasses?

It really requires a very high level of optical engineering to do it right. You have what Lumus has done. In addition to projection systems, others have proposed using compact lasers to project the image directly onto the retina. The people working on these technologies have mostly been small, underfunded start-ups or people who are interested only in the optics and not in actually building a complete [smartglasses] product.


What must Google and other companies building these head-mounted, wearable computers do for their devices to become mainstream?

They need to offer an interface that works the way people naturally move and interact. The use of side screens, like what you’re seeing with Google Glass, is basically the admission that they don’t know how to provide a truly augmented Terminator-like view—where the person wearing the headset can see annotated objects or streaming data while seeing through to the physical world. That’s where everybody wants to be as soon as possible.


The way it’s designed, Google Glass is not at all interesting for gaming or for watching movies. But if you could actually project a high-quality image or game or video in your normal field of view and have it respond instantly to changes in your head position, then telepresence becomes really amazing, because to your brain you’re essentially in that space.

How close are engineers to achieving this Terminator-like view?

There are a few devices out there that are okay for a 30- or 40-degree field of view, but when you think about what it will take for consumers to be interested, they’re going to want about 60, 70 or more degrees [in the] field of view. You don’t want this thing to be this little box in the center of your field of view. It could get very annoying, very quickly.


At what point does augmented-reality technology become integrated into the body itself?

One of our university labs actually did some work on an implanted neural interface chip—[which interacts with the brain]—that was powered wirelessly. It could be placed subcutaneously [under the skin] and never had to be removed because the power came from a small induction coil that was on the outside of the person’s skin. These things were fractions of an inch apart, and you could power what was designed as a very low-power device. But that’s just nibbling at the edges of it.


In addition to figuring out the display technology needed for wearable computing, what other challenges are there?

There are other issues, particularly if you want the virtual world and the real world to move together. That’s basically a latency problem. The graphics you see on a computer screen and television screen are running in a deeply pipelined design—[there’s a pipeline of continuous images that flow to the screen]. Frames are being processed at 30 or 60 per second to deliver moving images. But what happens when your viewing angle changes, as it would when you’re wearing [an augmented-reality] headset? Now you have to drain the pipeline of the previous images from your previous perspective and start refilling the pipeline with images from your new point of view. When this isn’t done fast enough, you get an effect known as “jutter”. You can see it sometimes in the movies or on TV when a camera moves quickly—the images on screen seem to stutter through a series of visual jumps before landing in the right position.
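
[A back-of-the-envelope illustration of the latency problem Rattner describes: the deeper the rendering pipeline, the older the head pose reflected in the frame that finally reaches the display, and the farther a fast head turn drags the overlay out of register. The frame rates, pipeline depth and head speed below are assumed values, for illustration only.]

```python
# Why a deep rendering pipeline produces "jutter": the frame reaching the display
# reflects a head pose that is several frames old, so a fast head turn lands the
# virtual image in the wrong place until the pipeline refills with new frames.

def overlay_error_deg(fps, pipeline_depth_frames, head_speed_deg_per_s):
    """Angular misplacement of a world-locked overlay caused by pipeline latency."""
    latency_s = pipeline_depth_frames / fps        # age of the frame on screen
    return head_speed_deg_per_s * latency_s

# With an assumed 3-frame pipeline at 60 frames per second, latency is 50 ms; a
# brisk 200-degree-per-second head turn then misplaces the overlay by roughly
# 10 degrees, far more than an augmented-reality overlay can tolerate.
for fps in (30, 60, 120):
    err = overlay_error_deg(fps, pipeline_depth_frames=3, head_speed_deg_per_s=200)
    print(f"{fps:3d} fps, 3-frame pipeline: ~{err:.1f} degrees of overlay error")
```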


So, not only do you have to have a new display, you have to invent a new graphics architecture that doesn’t have this latency problem. For example, if I look at one person, I’m collecting data about what I’m seeing. When I turn my head to look at someone or something else, all of the data related to that first person is gone, and I’m gathering data on the new image I’m seeing. One solution is to shut off the current image as soon as the headset detects that the wearer’s head is moving. The display returns when the new angle of view is reached. The problem is that we move our heads so often the user would have to learn to tolerate such a display. Otherwise, they’d be sick in an instant.
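
[The “shut off the image while the head moves” idea can be sketched as a simple state machine: blank the overlay whenever head angular velocity exceeds a threshold, and restore it only after the view has been stable for a few frames. The thresholds and the sample motion trace are invented for illustration; the output also shows why frequent head movement would leave the overlay dark much of the time.]

```python
# Minimal sketch of blanking the overlay during head motion (all numbers assumed).

BLANK_THRESHOLD_DEG_S = 30.0     # hide the overlay above this head speed
STABLE_FRAMES_TO_RESTORE = 5     # frames of low motion required before redrawing

def overlay_states(angular_speeds_deg_s):
    """Yield 'visible' or 'blanked' for each frame of head angular speed."""
    stable = STABLE_FRAMES_TO_RESTORE
    for speed in angular_speeds_deg_s:
        if speed > BLANK_THRESHOLD_DEG_S:
            stable = 0                               # head is moving: hide the overlay
        else:
            stable = min(stable + 1, STABLE_FRAMES_TO_RESTORE)
        yield "visible" if stable >= STABLE_FRAMES_TO_RESTORE else "blanked"

# A quick head turn followed by holding still: the overlay disappears during the
# turn and only returns after several quiet frames.
speeds = [2, 3, 120, 180, 150, 40, 10, 5, 4, 3, 2, 2]
print(list(overlay_states(speeds)))
```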


Hopefully before the decade is out we will get a solution to the graphics architecture that makes it jutter-free. In some sense I think the rest of the electronics is almost trivial. Building a smartphone-class computing system for such a headset is really not going to be that difficult. It’s well within what we can do today.


People will argue whether the Google approach has consumer potential or not. I think it’s a far cry from what people really, really want.


Are there wearable systems that are delivering what people want?

One of our favorite ones is Looxcie, which is a little clip-on video camera. That notion of continuously recording your life is a pretty interesting one. It’s a way to document your day.


What you really want, though, is good face recognition so when you go to that cocktail party or you walk into a bar, a little thing in your ear says, “Hey, that’s Bob Jones, your high school buddy.” You can walk right up to that person and say, “Oh, Bob, hey, how are you doing? So good to see you.” I guarantee you’ll sell a million of those in about a week.
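
[A hedged sketch of that scenario using the open-source face_recognition Python library; this illustrates the pipeline Rattner imagines (enroll a known face, match faces in a camera frame, announce the name) rather than anything he or Intel describes. The image file names and the spoken line are placeholders.]

```python
import face_recognition

# Enroll one known face from a photo on disk (hypothetical file name).
known_image = face_recognition.load_image_file("bob_jones.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# A frame captured by the wearable camera (again a placeholder file name).
frame = face_recognition.load_image_file("party_frame.jpg")

# Compare every face found in the frame against the enrolled encoding.
for encoding in face_recognition.face_encodings(frame):
    match = face_recognition.compare_faces([known_encoding], encoding, tolerance=0.6)
    if match[0]:
        # In a real earpiece system this string would be spoken by text-to-speech.
        print("Hey, that's Bob Jones, your high school buddy.")
```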


What are some other uses for these portable augmented-reality systems?

You begin to think of using augmented reality with biosensors, something like what the Tricorder XPRIZE is pursuing. Although everybody gets excited when they hear the word “tricorder” [the scanner device featured in Star Trek that doctors used to instantly diagnose patients], the show’s characters could have had their clothing instrumented to collect all of their vital signs with something like a smartphone [with an augmented-reality app]. They wouldn’t need to wave some magic wand over somebody to tell them what was wrong. Your Star Trek jumpsuit should have been reporting that information [to your phone] in real time.
