We Can’t Trust AI Until It Learns Some Common Sense

Imagine, if you will, a future where self-driving cars are much better than they are now. They reliably avoid obstacles, obey traffic lights and signs, and even respond to turn signals from other drivers. Autonomous carts are universally deployed at golf courses, movie lots, and residential communities, and self-driving vans fully service urban commuters in many cities. They have not caused an accident resulting in injury in years. In fact, these imagined cars are so safe and dependable that they don’t even need human drivers as backup.

The technology has reached the first wave of consumers, and many of your neighbors now have self-driving cars in their driveways. You’ve joined them. Your new car is amazing, better than you at driving in adverse conditions like blizzards, changing lanes in heavy traffic, and navigating through intersections crowded with pedestrians, dogs, strollers, bicycles, and other vehicles. You just set the destination, and off it goes. Remarkably, many stores now provide curbside spots for cars like yours to have their trunks loaded up and be sent on their way home, no human needed in the car at all.

Now picture this:

It’s just before noon on Monday, and you send your car on a shopping run to Jones’s Grocery Store. Today is the national Independence Day holiday, and Jones’s is having a sale on its great steaks, which you’re planning to barbecue this afternoon. You expect the trip to be quick because so many people are at the beach for the holiday and the store is not far away. The car approaches the intersection of Bradford and Victoria. The traffic light is red, so the car glides smoothly to a stop and waits for the light to turn green. Three minutes go by, and the light stays red. Five minutes go by, and the light is still red. The car’s cameras detect some activity on the far side of the intersection, but the system suggests no action. Your car’s computers, excellent with obstacle avoidance and lane following, are not equipped to understand that drivers on the other side are getting out of their cars, talking to each other, and pointing. Your car’s new external audio sensors, trained to detect honking and sirens, cannot make any sense of a somewhat distant musical sound coming from the right.

The car’s navigation system has no rerouting suggestions; there is no traffic to speak of, and no known construction or accidents along the route. So the car sits there, waiting patiently for the light to change. Ten minutes go by, and then fifteen.

Meanwhile, at home, you begin to wonder why the car hasn’t reported picking up the groceries. You know that if it had a mechanical problem, it would text you an alert. You check your app and see the car waiting at the intersection, and note that it hasn’t moved in a quarter hour. Yet, as far as the car is concerned, nothing is amiss. Your remote monitor shows that the car is just obeying the law, waiting for the light to turn green. Why isn’t it doing something? You throw your hands up, bemoaning the fact that while your car drives like a pro, it occasionally makes these inexplicable blunders, like waiting forever at a traffic light while your guests are about to arrive.

Now imagine an alternate scenario. It’s you driving this time. Same intersection, same stuck light, same five minutes gone by:

You ask yourself, What on earth is going on? Should I wait a bit longer just in case? Drive through the red light? Make a right turn to get around the intersection? Get to the store some other way? Forget about Jones’s and go to another store? Give up on groceries altogether and head back home? (You can always order takeout.) You turn off the radio and ponder for a few seconds. The drivers on the other side of the intersection catch your eye. You look where they seem to be pointing. You see a bunch of flags in the distance and what looks like an open convertible with some people sitting atop the back seat. You hear some brassy-sounding music coming from that direction.

After a moment, you give up on Jones’s, turn the car around, and head off to a different store.

Think about what was going through your head as you figured out what to do. It’s not as if you remembered some specific rule that solved the problem for you. Certainly nothing you learned in driving school told you what action to take. There’s really no single driving move that is the right thing to do in cases like this. Maybe you’ve experienced broken traffic lights before and followed other drivers right through the intersection (with great caution, of course). But in this particular case, you figured that the light was staying red because a parade was coming on the cross street. “Oh yeah,” you remembered, “it’s Independence Day!” You could have turned around and tried another route, but that was either going to take too much time or run into the parade at a different intersection. So you decided to go elsewhere. You could have gone to Smith’s Grocery Store, which is closer to where you are, but the meat and produce there are nowhere near as good. So you decided to head to Robert’s instead.

You don’t have to be some sort of driving expert, like a cabbie with thousands of hours on the road, to come up with driving behavior like this. Maybe it’s not a learned routine you can just follow mindlessly while concentrating on something else like brushing your teeth or walking the dog. But it’s not rocket science either. No pencil and paper required. You have to be able to take into account some things about your current situation, but beyond that, it’s really just the ordinary common sense that any intelligent person would be expected to have.

An average adult will know many relevant, mundane things: you don’t just wait forever at a traffic light; people talking and gesticulating likely indicates something interesting going on; flags and brassy band music likely presage a parade. And they’ll put it all together quickly and make a fast, reasonable guess at what to do next: take the second-best choice if the first one no longer looks as promising. It’s just common sense. And your supposedly intelligent car, while an excellent driver, maybe even an outstanding one, clearly doesn’t have it.

If the example above strikes you as improbable, recent books and articles illustrate numerous instances of blunders in the current world of AI—the real world of AI today, not one of imaginary drives and imaginary stores. Purportedly self-driving cars make gaffes that no human driver would, like mistaking pictures of stop signs on billboards for the real thing. So-called “smart speakers” have told children to press pennies onto prongs of plugs in wall sockets. AI systems are fragile and ignorant of the world around them. While they are quite adept at some things, they can’t notice when something they are about to do or say is so off-base as to be preposterous. They break down in unfamiliar circumstances in oddly unpredictable ways because they lack the broad capacity to deal with the open-ended world that humans get from common sense.

So what is this common sense that most people have, and what would it take for machines like self-driving cars to have it too? How could a nonhuman driver become more like one of us? What it takes to exhibit common sense is quite different from what it takes to perform well at particular tasks, even ones that seem to demand intelligence in people, such as playing decent chess or diagnosing a blood infection. Only through an in-depth analysis of common sense in humans and a plan for how to build it into AI systems can the field finally get out of the corner it has painted itself into. Among the most critical issues facing AI today are those of what common sense is, how it works, and what it might take to build computational systems that have it. This is the key to moving AI systems from the realm of idiots savants to that of robust, full-fledged intelligent agents, and into a future in which we can really trust them, because they can explain themselves, accept our advice on ways to avoid problems, and make sure that they always do something that makes sense.

Adapted excerpt from Machines like Us: Toward AI with Common Sense by Ronald J. Brachman and Hector J. Levesque, © 2022 Massachusetts Institute of Technology
