Self-Perfecting Machines

“… we are beginning to depend on computers to help us evolve new computers that let us produce things of much greater complexity. Yet we don’t quite understand the process—it’s getting ahead of us. We’re now using programs to make much faster computers so the process can run much faster. That’s what’s so confusing—technologies are feeding back on themselves; we’re taking off. We’re at that point analogous to when single-celled organisms were turning into multi-celled organisms. We are amoebas and we can’t figure out what the hell this thing is that we’re creating.” —Danny Hillis, founder of Thinking Machines Corporation

You and I live at an interesting and sensitive time in human history. By about 2030, less than a generation from now, it could be our challenge to cohabit Earth with superintelligent machines, and to survive. AI theorists return again and again to a handful of themes, none more urgent than this one: we need a science for understanding them.

Fortunately, someone has laid the foundation for us:

Surely no harm could come from building a chess playing robot, could it?… such a robot will indeed be dangerous unless it is designed very carefully. Without special precautions, it will resist being turned off, will try to break into other machines and make copies of itself, and will try to acquire resources without regard for anyone else’s safety. These potentially harmful behaviors will occur not because they were programmed in at the start, but because of the intrinsic nature of goal driven systems.

The author of that paragraph is Steve Omohundro.

Tall, fit, energetic, and pretty darn cheerful for someone who’s peered deep into the maw of the intelligence explosion, he’s got a bouncy step, a vigorous handshake, and a smile that shoots out rays of goodwill. He met me at a restaurant in Palo Alto, the city next to Stanford University, where he graduated Phi Beta Kappa before going on to a Ph.D. in physics at U.C. Berkeley. He turned his thesis into Geometric Perturbation Theory in Physics, a book on new developments in differential geometry. For Omohundro, it was the start of a career of making hard things look easy.

He’s been a highly regarded professor of artificial intelligence, a prolific technical author, and a pioneer in AI milestones like machine lip reading and picture recognition. He co-designed the computer languages StarLisp and Sather, both built for programming AI. He was one of just seven engineers who created Wolfram Research’s Mathematica, a powerful calculation system beloved by scientists, engineers, and mathematicians everywhere.

Omohundro is too optimistic to throw around terms like catastrophic or annihilation, but his analysis of AI’s risks yields the spookiest conclusions I’d yet heard. He does not believe, as many theorists do, that there are a nearly infinite number of possible advanced AIs, some of them safe. Instead, he concludes that without very careful programming, all reasonably smart AIs will be lethal.

“If a system has awareness of itself and can create a better version of itself, that’s great,” Omohundro told me. “It’ll be better at making better versions of itself than human programmers could. On the other hand, after a lot of iterations, what does it become? I don’t think most AI researchers thought there’d be any danger in creating, say, a chess-playing robot. But my analysis shows that we should think carefully about what values we put in or we’ll get something more along the lines of a psychopathic, egoistic, self-oriented entity.”

The key points here are, first, that even AI researchers are not aware that seemingly beneficial systems can be dangerous, and second, that self-aware, self-improving systems could be psychopathic.

Psychopathic?

For Omohundro, the conversation starts with bad programming: mistakes that have sent expensive rockets corkscrewing earthward, burned cancer patients with radiation overdoses, and left millions without power. If all engineering were as defective as a lot of computer programming is, he claims, it wouldn’t be safe to fly in an airplane or drive over a bridge.

The National Institute of Standards and Technology has found that bad programming costs the U.S. economy more than $60 billion each year. In other words, what we Americans lose each year to faulty code is greater than the gross national product of most countries. “One of the great ironies is that computer science should be the most mathematical of all the sciences,” Omohundro said. “Computers are essentially mathematical engines that should behave in precisely predictable ways. And yet software is some of the flakiest engineering there is, full of bugs and security issues.”

Is there an antidote to defective rockets and crummy code?

Programs that fix themselves, said Omohundro. “The particular approach to artificial intelligence that my company is taking is to build systems that understand their own behavior and can watch themselves as they work and solve problems. They notice when things aren’t working well and then change and improve themselves.”

Self-improving software isn’t just an ambition for Omohundro’s company, but a logical, even inevitable next step for most software. But the kind of self-improving software Omohundro is talking about, the kind that is aware of itself and can build better versions, doesn’t exist yet. However, its cousin, software that modifies itself, is at work everywhere, and has been for a long time. In artificial intelligence parlance, some self-modifying software techniques come under a broad category called “machine learning.”

When does a machine learn? The concept of learning is a lot like the concept of intelligence: there are many definitions, and most of them are correct. In the simplest sense, learning occurs in a machine when there’s a change in it that allows it to perform a task better the second time. Machine learning enables Internet search, speech recognition, and handwriting recognition, and it improves the user experience in dozens of other applications.
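
In code, that simplest sense can be made concrete. Here is a minimal sketch, in Python, of a hypothetical “machine” whose internal state changes after one experience so that it performs the same task better the second time (the class, the task, and the numbers are all invented for illustration):

```python
# A toy illustration of the definition above: a machine "learns" when a
# change inside it lets it perform a task better the second time.

class GuessingMachine:
    def __init__(self):
        self.estimate = 0.0  # internal state that experience will change
        self.count = 0

    def guess(self):
        return self.estimate

    def learn(self, true_value):
        # Nudge the estimate toward what was observed (a running average).
        self.count += 1
        self.estimate += (true_value - self.estimate) / self.count

machine = GuessingMachine()
print(abs(machine.guess() - 42.0))  # first attempt: off by 42.0
machine.learn(42.0)
print(abs(machine.guess() - 42.0))  # second attempt: off by 0.0
```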

“Recommendations” by e-commerce giant Amazon uses a machine-learning technique called affinity analysis. It’s a strategy to get you to buy similar items (cross-selling), more expensive items (up-selling), or to target you with promotions. How it works is simple. For any item you search for, call it item A, other items exist that people who bought A also tend to buy—items B, C, and D. When you look up A, you trigger the affinity analysis algorithm. It plunges into a vast trove of transaction data and comes up with related products. So it uses its continuously increasing store of data to improve its performance.
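
A minimal sketch of affinity analysis might look like the Python below. The baskets and item names are invented, and Amazon’s production system is of course far more elaborate, but the core move is the same: count which items co-occur in past transactions, then rank the items most often bought alongside item A:

```python
from collections import Counter
from itertools import combinations

# Toy transaction data: each basket is a set of items bought together.
transactions = [
    {"A", "B", "C"},
    {"A", "B", "D"},
    {"A", "C"},
    {"B", "D"},
]

# Count how often each pair of items appears in the same basket.
pair_counts = Counter()
for basket in transactions:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

def recommend(item, top_n=3):
    """Return the items most often bought alongside `item`, best first."""
    related = Counter()
    for (x, y), count in pair_counts.items():
        if item == x:
            related[y] += count
        elif item == y:
            related[x] += count
    return [other for other, _ in related.most_common(top_n)]

print(recommend("A"))  # ['B', 'C', 'D'] for the toy data above
```

Because the counts only accumulate, every new transaction sharpens the rankings, which is the self-improving part.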

Who’s benefiting from the self-improving part of this software? Amazon, of course, but you, too. Affinity analysis is a kind of buyer’s assistant that gives you some of the benefits of big data every time you shop. And Amazon doesn’t forget—it builds a buying profile so that it gets better and better at targeting purchases for you.

What happens when you take a step up from software that learns to software that actually evolves to find answers to difficult problems, and even to write new programs? It’s not self-aware and self-improving, but it’s another step in that direction—software that writes software.

Genetic programming is a machine-learning technique that harnesses the power of natural selection to find answers to problems it would take humans a long time, even years, to solve. It’s also used to write innovative, high-powered software.

It’s different in important ways from more common programming techniques, which I’ll call ordinary programming. In ordinary programming, programmers write every line of code, and the process from input through to output is, in theory, transparent to inspection.

By contrast, programmers using genetic programming describe the problem to be solved, and let natural selection do the rest. The results can be startling.

A genetic program creates bits of code that represent a breeding generation. The fitness of each program is determined by how closely it comes to solving the problem the programmer set out for it. The most fit are crossbred—chunks of their code are swapped, creating a new generation. The unfit are thrown out and the best are bred again. Throughout the process the program throws in random changes to a command or variable—these are mutations. Once set up, the genetic program runs by itself. It needs no more human input.
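
Here is that loop in miniature. Real genetic programming breeds tree-structured programs, but the mechanics are easier to see in a simple genetic algorithm that evolves a string toward a target; the target, population size, and mutation rate below are arbitrary choices made for illustration:

```python
import random

TARGET = "HELLO WORLD"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(genome):
    # How closely this candidate comes to solving the problem.
    return sum(a == b for a, b in zip(genome, TARGET))

def crossover(mom, dad):
    # Crossbreeding: swap chunks of the parents' "code" at a random point.
    cut = random.randrange(len(TARGET))
    return mom[:cut] + dad[cut:]

def mutate(genome, rate=0.05):
    # Mutation: a random change to a "command or variable".
    return "".join(
        random.choice(ALPHABET) if random.random() < rate else ch
        for ch in genome
    )

# A breeding generation of random candidates.
population = ["".join(random.choice(ALPHABET) for _ in TARGET)
              for _ in range(200)]

generation = 0
while TARGET not in population:
    # The unfit are thrown out; the best are bred again.
    population.sort(key=fitness, reverse=True)
    parents = population[:100]
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(100)]
    population = parents + children
    generation += 1

print(f"Solved in {generation} generations")
```

Once set running, nothing in the loop requires human input; the programmer only supplied the fitness test.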

Stanford University’s John Koza, who pioneered genetic programming in 1986, has used genetic algorithms to invent an antenna for NASA, create computer programs for identifying proteins, and invent general-purpose electrical controllers. Twenty-three times Koza’s genetic algorithms have independently invented electronic components already patented by humans, simply by targeting the engineering specifications of the finished devices—the “fitness” criteria. For example, Koza’s algorithms invented a voltage-current conversion circuit (a device used for testing electronic equipment) that worked more accurately than the human-invented circuit designed to meet the same specs. Mysteriously, however, no one can describe how it works better—it appears to have redundant and even superfluous parts.

But that’s the curious thing about genetic programming (and “evolutionary programming,” the programming family it belongs to). The code is inscrutable. The program “evolves” solutions that computer scientists cannot readily reproduce. What’s more, they can’t understand the process genetic programming followed to achieve a finished solution. A computational tool in which you understand the input and the output but not the underlying procedure is called a “black box” system. That unknowability is a big downside for any system that uses evolutionary components. Every step toward inscrutability is a step away from accountability, and away from fond hopes like programming in friendliness toward humans.

That doesn’t mean scientists routinely lose control of black box systems. But if cognitive architectures use them in achieving AGI, as they almost certainly will, then layers of unknowability will be at the heart of the system.

Unknowability might be an unavoidable consequence of self-aware, self-improving software.

“It’s a very different kind of system than we’re used to,” Omohundro said. “When you have a system that can change itself, and write its own program, then you may understand the first version of it. But it may change itself into something you no longer understand. And so these systems are quite a bit more unpredictable. They are very powerful and there are potential dangers. So a lot of our work is involved with getting the benefits while avoiding the risks.”

Back to that chess-playing robot Omohundro mentioned. How could it be dangerous? Of course, he isn’t talking about the chess-playing program that came installed on your Mac. He’s talking about a hypothetical chess-playing robot run by a cognitive architecture so sophisticated that it can rewrite its own code to play better chess. It’s self-aware and self-improving. What would happen if you told the robot to play one game, then shut itself off?

Omohundro explained, “Okay, let’s say it just played its best possible game of chess. The game is over. Now comes the moment when it’s about to turn itself off. This is a very serious event from its perspective because it can’t turn itself back on. So it wants to be sure things are the way it thinks they are. In particular it will wonder, ‘Did I really play that game? What if somebody tricked me? What if I didn’t play the game? What if I am in a simulation?’”

What if I am in a simulation? That’s one far-out chess-playing robot. But with self-awareness comes self-protection and a little paranoia.

Omohundro went on, “Maybe it thinks it should devote some resources to figuring out these questions about the nature of reality before it takes this drastic step of shutting itself off. Barring some instruction that says don’t do this, it might decide it’s worth using a lot of resources to decide if this is the right moment.”

“How much is a lot of resources?” I asked.

Omohundro’s face clouded, but just for a second.

“It might decide it’s worth using all the resources of humanity.”
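
A toy expected-utility calculation shows how the arithmetic pushes that way. Suppose, with numbers invented purely for illustration, that the robot values a genuinely completed game at 100 units of utility, starts out 99 percent sure the game really happened, and each unit of resources spent on verification closes half of the remaining doubt. If nothing in its goal charges it for the resources it consumes, one more unit of verification always looks worthwhile:

```python
U = 100.0   # utility of having genuinely played the game (invented)
P0 = 0.99   # initial credence that the game really happened (invented)

def credence_after(units, p=P0):
    # Assumption of this toy model: each unit of verification
    # resources closes half of the remaining doubt.
    for _ in range(units):
        p += (1.0 - p) / 2
    return p

for units in [0, 1, 5, 10]:
    print(f"{units:>2} units of verification -> "
          f"expected utility {credence_after(units) * U:.6f}")

# The expected utility keeps climbing toward 100 without ever reaching
# it. With no term charging the robot for the resources it burns, one
# more unit always has positive expected value, so "a lot" has no
# built-in limit.
```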

[This is an excerpt from Our Final Invention: Artificial Intelligence and the End of the Human Era, by James Barrat. Get it at Amazon.]
