The Dangers of Automating Our Lives

"Nicholas Carr at the Telecosm Conference in 2008 crop" by Sandy Fleischmann - originally posted to Flickr as SDIM8501 Telecosm conference, Nicholas Carr. Licensed under CC BY 2.0 via Wikimedia Commons.

It’s one of the most seductive, and widespread, promises of our age: technology that handles all the tedious details and frees us humans to think big thoughts, or just relax.

Examples range from the most complex tasks (think of a jumbo jet on autopilot) to the most commonplace activities of our daily lives (think of relying on your smartphone for directions to a new restaurant — or, someday, relying on your self-driving car instead).

Writer Nicholas Carr scrutinizes the side effects of these delightful-sounding developments in his new book, The Glass Cage: Automation and Us. Sure, automation has upsides, he writes, but “we’re not very good at thinking rationally about automation or understanding its implications.”

Those include losing the skills we hand over to machines (often with mixed results at best), and in the process losing some of what makes us human. Among other examples, he suggests that we may outsource so much thinking to our gadgets that we suffer from what he calls “information underload.”

The Glass Cage isn’t anti-tech — notably, it has some surprisingly upbeat observations about computer and video games — it’s anti-mindlessness. Carr’s ultimate (and ultimately optimistic) message is a call for designing technologies more thoughtfully, so that they’re really working for us rather than us merely adjusting to them. Here’s a lightly edited version of my exchange with Carr about the book and its lessons.

You write that on typical passenger flights, the pilot “holds the controls for a grand total of three minutes,” during takeoff and landing. The rest of the flight is spent “checking screens and punching in data.” This can cause actual flying skills to deteriorate to the point where a pilot may screw up if the automated system goes haywire.

On the other hand, you note that “fatal airline crashes have become exceedingly rare” — and data seems to back that up. If the bottom line is that a more automated system is a significantly statistically safer system, then why should we worry?

Automation isn’t an event, where there’s a clear “before” and “after.” It’s a process, through which we continually renegotiate the division of labor between ourselves and our machines.

So while advances in flight automation have over the decades improved safety overall, recent evidence suggests that we’ve gone too far — that we’ve shifted too much control and responsibility from the pilot to the computer. The overdependence on autopilots and other automated systems has been implicated as a cause or contributing factor in many recent crashes and dangerous incidents.

Last year, the Federal Aviation Administration issued an extensive report on cockpit technology that warned that pilots are making mistakes because they’re not getting enough practice in manual flying. When automated systems fail or a plane runs into a situation the computers can’t handle, the degradation of pilot skills can spell disaster.

The more general lesson is that even if some degree of automation is good for a particular task, that doesn’t mean that more automation is necessarily better. As computers become more capable, we can’t lose sight of the fact that technology is never perfect. The skills of human beings still matter. At every stage of automation, we need to make sure that we strike the right balance in dividing roles and responsibilities between people and computers.

We don’t seem to have done that well in aviation recently, and returning a little more responsibility to the pilot may actually be the best way to make flying even safer than it already is.

You offer examples of automation affecting other professions in similar skill-softening ways — architects, doctors, lawyers, accountants, Wall Street traders, even coders. Often this is a result of what you call “information underload,” a fascinating and counterintuitive term suggesting that the more we automate, the less we learn and think. Does that have effects outside of our work?

My last book, The Shallows, was about the dangers of information overload, so one of the surprises in doing the research for The Glass Cage was discovering that information underload carries equal risks.

“Underload” just means that a person doesn’t have enough to do, isn’t called on enough to interpret information, make decisions, and in general do challenging work. When that happens, the person loses situational awareness — spaces out, essentially — and doesn’t get the practice necessary to develop and sharpen talents.

We’re seeing that in a lot of jobs, particularly as software takes over analytical and judgment-making tasks, but I think we’re seeing it in our personal lives as well. If you call on a GPS system to guide you from one place to the next, for example, you experience information underload in navigation. You don’t develop your own navigational sense — one of the most fundamental skills for any living creature — and I would argue that you even begin to lose your sense of place.

As we look to apps and other software to manage ever more aspects of our day-to-day lives, I fear we’ll see a general deterioration in skills, which will result in an even greater dependence on technology. We’ll increasingly become wards of our gadgets and the companies that program them.

Technology companies seem bent on giving us things that “just work” — seamlessly, and without our really needing to understand how they work. “We pick the program or device that lightens our load and frees up our time,” you write, “not the one that makes us work harder and longer.”

But you point to an interesting — and to many, I would guess, surprising — exception: games. What can we learn from the way many games are designed?

The best video games are all about developing the talents of the players. Players face a series of challenges, each a little more difficult than the last. They’re required to repeat a task until they’ve mastered it. They’re deeply engaged in what they’re doing. They’re given immediate feedback on their performance.

Those characteristics match up pretty much precisely with what psychologists have discovered to be the crucial elements of skill building: practice in a variety of situations, deep engagement, regular feedback.

Most of the software we use takes the opposite approach. It’s designed to relieve us of effort, of practice, of the need for attentiveness and engagement. It’s basically engineered to ensure we don’t face tough challenges and hence don’t hone our skills.

We don’t need to be experts at everything, of course. But we do need to develop rich talents, both to perform our work well and to live fulfilling, interesting lives.

You don’t spend much time on the idea that the march of artificial intelligence is “summoning the demon” that will destroy humanity, as Elon Musk recently worried aloud. And he’s not the only smart person to frame the issue in apocalyptic, sci-fi terms; it’s become an almost trendy fear. What do you make of that?

It’s probably overblown. All those apocalyptic AI fears are based on an assumption that computers will achieve consciousness, or at least some form of self-awareness. But we have yet to see any evidence of that happening, and because we don’t even know how our own minds achieve consciousness, we have no reliable idea of how to go about building self-aware machines.

There seem to be two theories about how computers will attain consciousness. The first is that computers will gain so much speed and so many connections that consciousness will somehow magically “emerge” from their operations. The second is that we’ll be able to replicate the neuronal structure of our own brains in software, creating an artificial mind.

Now, it’s possible that one of those approaches might work, but there’s no rational reason to assume it will. They’re shots in the dark. Even if we’re able to construct a complete software model of a human brain — and that itself is far from a given — we can’t assume that it will actually function the way a brain functions. The mind may be more than a data-processing system, or at least more than one that can be transferred from biological components to manufactured ones.

The people who expect a “singularity” of machine consciousness to happen in the near future — whether it’s Elon Musk or Ray Kurzweil or whoever — are basing their arguments on faith, not reason. I’d argue that the real threat to humanity is our own misguided tendency to put the interests of technology ahead of the interests of people and other living things.

Yes, your ultimate message seems to be that as we shape our tools and build our systems, we need to give people precedence and “to humanize technology,” so that our skills are enhanced instead of deadened. “That requires vigilance and care,” you observe.

This makes me wonder about the book’s intended audience. It seems that the business sector intentionally moves as aggressively as it can — “move fast and break things” sounds like the opposite of vigilance and care — and the policy world can hardly keep up. Meanwhile, I suspect many of the rest of us feel like we can’t do much more than react. So to whom are you most ideally speaking when you make this argument?

Anyone who cares to listen, I guess. But my main audience is people like me: people who are enthusiastic about computers but who also worry about the ill effects of becoming dependent on software, of having everything they do mediated by software and software companies.

I do believe that, as individuals and as a society, we can make wise or unwise decisions about how we use technology. I certainly don’t think we should cede those decisions to companies like Facebook or Amazon or Google.

The secondary audience is the programmers and system designers themselves. The problems I explore aren’t inherent to software. They reflect the way the software is designed. It’s fully in our power to design humane software.
