On a video screen projected to a crowd attending the United Nations’ Convention on Certain Conventional Weapons summit, a small drone whizzes past a tech executive. It shoots a projectile into the skull of a test dummy, detonating an explosive that could kill a human. In front of an audience, the executive pitches the drones as “unstoppable” and calls them capable of “an airstrike of surgical precision” that could render nuclear weapons obsolete.
The scene quickly cuts to the drones being hijacked by terrorist organizations and going on a killing spree, targeting politicians and social media activists. The film, titled Slaughterbots and produced by the Future of Life Institute, shows how easily autonomous weapons could become weapons of mass destruction.
In this week’s episode of Yahoo News’ Unfiltered, we talk to New School professor Peter Asaro about the dangers of artificial intelligence technology and autonomous weapons. In an age when AI plays a dominant role in pop culture, some may view Slaughterbots as science fiction, but for Asaro, it’s the not-so-distant future.
“We could wind up in situations where we have robots patrolling contested borders,” Asaro says. “Countries that start assassinating people around the world with small robots and drones, killing large numbers of a very specific set of people.”
“Time is running out,” he warns.
Asaro is also a co-founder of the International Committee for Robot Arms Control, a nonprofit organization that focuses on creating awareness about “the pressing dangers that military robots pose.” Along with Amnesty International and Human Rights Watch, the group is a member of the Campaign to Stop Killer Robots. For the past five years, the organization has been urging the United Nations and governments worldwide to establish policies that would preemptively ban lethal AI.
“The reality that we’re concerned with in the Campaign to Stop Killer Robots [is] autonomous weapons systems,” Asaro says. “These are weapons systems that militaries are developing right now that are capable of targeting and engaging weapons without any meaningful human control.”
The best-known semi-autonomous weapons used by the U.S. military are armed drones like the Predator, which has been deployed in conflicts in Iraq and Afghanistan. Takeoff, landing, and autopilot are autonomous, but “when it comes to actually engaging the weapons systems, the humans have to be operating the system,” Asaro says.
For him and many others, the key issue is that one day a computer — using its own software and programming — could autonomously analyze the data to select and attack a human target. “Machines cannot be legal and moral agents,” Asaro says. “They could act unpredictably and cause harm to civilians. They could initiate or escalate conflicts. They could lead to arms races that could be regionally and globally destabilizing. They could be susceptible to hacking, which means they can also be taken over and redirected by third parties. So we could be attacked by some robot and not know who owns that robot or what country sent that robot out.”
Asaro believes that this technology could also lead to armies composed entirely of robots. “You no longer need a nation of people and a bunch of followers to raise an army … if you can just manufacture an army.”
The Campaign to Stop Killer Robots has been lobbying the U.N. to update the Convention on Certain Conventional Weapons, the treaty that entered into force in 1983, to cover autonomous weapons. So far, 26 countries support a full ban, including Iraq and China. The United States is not among them, however.
Asaro considers this a major misstep. “The average citizen should be very concerned with this issue,” he says. “I think right now what we really need are citizens to go to their governments and say, ‘Look, there’s an easy solution. Enact this treaty and support its movement forward in the U.N.,’ and that’s something states can do quite easily at this point.
“Once these weapons are out there and we start to see all of these consequences … I think it’s going to be far more difficult to prohibit them.”