Yes, Robots Are Coming for Our Jobs. Now What?

Fifteen years ago Deep Blue beat Garry Kasparov at chess, marking the beginning of what Massachusetts Institute of Technology economist Erik Brynjolfsson calls the new machine age—an era driven by exponential growth in computing power. Lately, though, people have been feeling uneasy about the machine age. Pundits and experts seem to agree that the robots are definitely taking our jobs. At last week’s TED conference, Brynjolfsson argued that the new machine age is great for economic growth, but we still have to find a way to coexist with the machines. We asked him to expand on a few points.

[An edited transcript of the interview follows.]

You’ve written a lot about what you call the new machine age, which you argue is fundamentally different from previous industrial eras. What’s so different about it?

The first and second industrial revolutions were defined by these general-purpose technologies, the steam engine and, later, electricity. The new machine age is defined by digital technologies, and those have a lot of unusual characteristics. First, you can reproduce things at close to zero marginal cost with perfect quality and almost instant delivery—that’s something you can’t do with atoms. Second, computers are getting better faster than anything else—ever. That’s something we’re not used to dealing with, and it’s happening year over year, relentlessly. Third, you can remix or combine technologies in a way that doesn’t use them up, but that instead allows for even more combinations. That means we’re in no danger of running out of ideas. Put those all together, and I’m very optimistic about the future of economic growth and productivity growth. And the data bear that out.
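
To put rough numbers on the first and second points, here is a minimal back-of-the-envelope sketch in Python; the 18-month doubling period and the subset-counting model of "remixing" are illustrative assumptions, not figures from the interview.

```python
# Toy arithmetic for two claims above (assumed parameters, not figures
# from the interview): steady doubling of computing power, and "remixing"
# modeled as choosing any nonempty subset of available technologies.

def compute_multiplier(years: float, doubling_months: float = 18.0) -> float:
    """Computing-power multiplier after `years` of steady doubling."""
    return 2 ** (years * 12 / doubling_months)

def remix_count(n: int) -> int:
    """Nonempty combinations of n technologies: 2**n - 1."""
    return 2 ** n - 1

if __name__ == "__main__":
    print(f"15 years of doubling: ~{compute_multiplier(15):,.0f}x the compute")
    for n in (10, 20, 30):
        print(f"{n:>2} technologies -> {remix_count(n):,} possible combinations")
```

At that assumed pace, 15 years compounds to roughly a thousandfold gain, and each new technology doubles the space of possible combinations, which is the "no danger of running out of ideas" point in plainer arithmetic.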

You’ve also said that productivity has become “decoupled” from employment. Can you explain?

Throughout most of modern history, productivity and employment have grown side by side. But about 15 years ago the two began to decouple: productivity continued to grow, even accelerate, while employment stagnated and even fell, as did median wages for the people who were still working. This was an important milestone, because most economists, including me, used to be of the mind-set that if you just keep increasing productivity, everything else kind of takes care of itself.

But there’s no economic law that says everyone has to benefit equally from increased productivity. It’s entirely possible that some people benefit a lot more than others or that some people are made worse off. And as it turns out, for the past 10 to 15 years it’s gone that way. The pie has gotten bigger but most of the increase in income has gone to less than 1 percent of the population. Those at the 50th percentile or lower have actually done worse in absolute terms.
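
A tiny worked example makes the "bigger pie, worse-off median" point concrete; the incomes below are invented for illustration, not data from the interview.

```python
# Hypothetical incomes, before and after a productivity gain, chosen
# only to show that the mean (the "pie") can grow while the median falls.
from statistics import mean, median

before = [20, 40, 60, 80, 100]   # invented, arbitrary units
after  = [15, 30, 55, 80, 200]   # total is larger, but gains concentrate at the top

for label, xs in (("before", before), ("after", after)):
    print(f"{label}: total={sum(xs)}, mean={mean(xs):.0f}, median={median(xs):.0f}")
# before: total=300, mean=60, median=60
# after:  total=380, mean=76, median=55
```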

There are a lot of causes for this—some of it has to do with offshoring, tax policy and so on—but those are minor players compared with the big story, which is the nature of technology. It’s simultaneously allowing us to grow faster and leading us to a very different allocation of those benefits.

Exactly how is technology shifting the landscape of jobs and wealth—who wins and who loses?

In our book Race against the Machine [written with Andrew McAfee, principal research scientist at the Center for Digital Business in the M.I.T. Sloan School of Management], we describe three sets of winners and losers. The first is skilled versus less skilled workers, as a result of what’s called skill-biased technical change. As technology advances, educated workers tend to benefit more, and workers with less education tend to have their jobs automated. It’s not a perfect correlation, but there is a correlation.

The second is called capital-biased technical change. The share of income going to capital owners has grown relative to the share of income going to labor providers. It makes some intuitive sense that when you replace human workers in a factory with robots, the capital owners will earn a bigger share of the income from that factory. That’s been happening at an accelerating pace in recent years. You may be surprised to hear that for most of the 20th century that did not happen. In fact, it didn’t really happen until about 15 years ago.

The third change might be the most important one: It’s called superstar-biased technical change, or talent-biased technical change. If somebody has talent or luck in creating something that people want, that thing can be replicated with digital technology much more easily than in the past. Think of someone who writes software. You can take that talent or luck and replicate it a million times. And while the person who created it does very, very well, the people who previously did that job are less important or maybe not even necessary. The example that I gave in my TED talk was TurboTax. You’ve got a human tax preparer being replaced by a $39 piece of software. It makes the pie bigger in the sense that you create more value with less effort, but most of that value goes to a very small group of people.
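
A hedged sketch of the cost arithmetic behind the TurboTax example: with a human preparer, cost scales with the number of returns; with software, almost all cost is up-front and each additional copy is nearly free. All of the numbers below are invented for illustration.

```python
# Toy cost model (all numbers invented): serving n customers with
# human labor (linear cost) versus digital replication (mostly fixed cost).

def human_cost(n: int, per_return: float = 150.0) -> float:
    return n * per_return                      # each return takes paid labor

def software_cost(n: int, fixed: float = 5_000_000.0,
                  per_copy: float = 0.10) -> float:
    return fixed + n * per_copy                # near-zero marginal cost

for n in (10_000, 1_000_000, 100_000_000):
    print(f"n={n:>11,}: human=${human_cost(n):>14,.0f}  "
          f"software=${software_cost(n):>13,.0f}")
```

Past the crossover point the software wins by orders of magnitude, and the returns accrue to whoever owns the copy being replicated rather than to the workers who used to do the job.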

I do find it surprising that what you call capital-biased technical change didn’t take off until 15 years ago. Computers aren’t that new, nor are factory robots. What has changed?

I should be clear: Technology has always been creating and destroying jobs. Automatic threshers replaced 30 percent of the labor force in agriculture in the 19th century. But it happened over a long period of time, and people could find new kinds of work to do. This time it’s happening faster. And technological change is affecting more industries simultaneously. Threshing changed agriculture, but digital technology is affecting pretty much every industry. Finally, these technologies are just qualitatively different. The ability to digitize makes it more valuable to be a creator and less valuable to be someone who carries out instructions or replicates stuff. You don’t need to pay people to handcraft each copy of TurboTax. That’s different from, say, an automobile—at least for the time being.

In a way, we were just kind of lucky during the 19th and 20th centuries that technology evolved in a way that helped people at the middle of the income distribution. There’s no economic law that says technology has to work that way. And as it happens, a set of technologies that don’t work that way are becoming very important right now.

As you know, many people are underwhelmed by the benefits those technologies have brought us. Here’s something you hear a lot: In the 20th century we put people on the moon. In the 21st century we got Facebook. What do you make of that sentiment?

Even if you go by industrial-age metrics—amount of stuff produced—productivity has been doing great. But if anything, I think that’s an underestimate, because both psychologically and in the data we tend to underweight things that are made of bits versus things that are made of atoms. A rocket blasting off looks really big and impressive. Facebook, which you use to connect with your grandmother, maybe doesn’t look as impressive.

Yet in terms of utility, which is what economists care about, you could make the case that Facebook has made more people happier. People seem to be voting with their hours. They’re spending time communicating with friends and family—showing pictures of their babies or dogs. And I’m not in a position to say, no, that’s an unworthy type of happiness. I’m going to go by what people choose to do. In fact, when we do research on where people are spending their time and what they’re doing, we find that there’s about $300 billion of unmeasured value in all these free goods on the Internet—Wikipedia, Facebook, Google, free music—that don’t get counted in GDP statistics.
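
One common back-of-the-envelope way to value free goods, hours spent multiplied by a value-of-time proxy, can be sketched as follows; the hours, services and wage figure are placeholders, not the inputs behind the $300-billion estimate.

```python
# Illustrative valuation of free online goods: hours spent times an
# assumed value of time. All inputs are placeholders, not research data.

HOURLY_VALUE = 20.0  # assumed dollars per leisure hour

# (service, assumed billions of user-hours per year)
usage = [("social networking", 5.0), ("search", 2.0), ("free encyclopedias", 0.5)]

total = sum(hours_bn * 1e9 * HOURLY_VALUE for _, hours_bn in usage)
for name, hours_bn in usage:
    print(f"{name:>20}: ~${hours_bn * 1e9 * HOURLY_VALUE / 1e9:,.0f}B/year")
print(f"{'total':>20}: ~${total / 1e9:,.0f}B/year of unmeasured value")
```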

But what do we do about the decoupling of productivity and employment? If technology creates a class of permanently underemployed people, the social effects could be awful, and a lot of people are very worried about it.

The first step is to diagnose it correctly—to understand why the economy is changing and why people aren’t doing as well as they used to. We also need to think about inventing new kinds of organizations that work in this new culture. There are a few examples, most of them relatively small, like oDesk or Etsy or Foldit. Foldit is a game you can play on the Web. One of the things you need to do in biomedicine is understand how a particular sequence of amino acids codes for a particular protein shape, and it turns out that computers can’t do that well. But humans, with their very good visual cortexes, can identify ways that proteins fold that computers can’t. It’s a more practical version of humans and computers playing chess together to beat other computers.

We need to unleash entrepreneurs to find more places where humans have capabilities that machines don’t have, and where the two working together can create more value than the machines alone could—what we call racing with the machine. A century ago, when people were no longer needed on the farm, they came up with whole new industries. We’re not doing that as well as we could be, and we have to try to jump-start it.
