Why the world needs to regulate autonomous weapons

The Convention on Certain Conventional Weapons (CCW) at the United Nations has just concluded a second round of meetings in Geneva on lethal autonomous weapons systems. Both the urgency and significance of the discussions in that forum have been heightened by the rising concerns over artificial intelligence (AI) arms races.

The central issue in the CCW discussions over lethal autonomous weapons is the necessity for human control over what the International Committee of the Red Cross has called the “critical functions” of targeting and engagement in attacks. AI could be used in various ways by militaries, including in weapons systems, and even in the critical functions of targeting and engagement. The issue is not what kind of technology is used or its sophistication, but whether and how the authority to target and engage is delegated to automated processes, and what implications this has for human responsibility and accountability, as well as human rights and human dignity.

If responsibility and accountability in the engagement of weapons erode and shift away from human control, a range of other critical issues arises. If one country fields machines that autonomously target and kill humans, other countries could quickly follow, resulting in destabilizing and costly arms races, regional and global. Given the nature of the technology, we can also expect these systems to proliferate rapidly among countries and to spread to non-state actors.

Automated systems could initiate or escalate conflicts without human political or military decision-making or authority. Furthermore, large-scale deployments of such systems will behave in intrinsically unpredictable ways, especially when they engage other such systems, raising questions about state responsibility.

We already see the military interest in speeding up battlefield decision-making and the shift from human speed to machine speed. Will accelerating military tactics to speeds at which humans cannot operate meaningfully mean that humans eventually lose control over military strategy as well? By empowering small groups of people — even individuals — to unleash massive levels of destruction and kill in great numbers, autonomous weapons could constitute a new kind of weapon of mass destruction.

Each of these issues could have powerfully negative effects on the number and intensity of conflicts around the world. Taken together, they would transform warfare into something new, unfamiliar and increasingly unpredictable. Will politics and diplomacy be able to keep up?

Insofar as these possible changes in the direction of autonomous warfighting are threats to human and state responsibility, they also threaten to undermine the rule of law itself. The question is fundamental: Can the global community of nations come together to agree not to pursue weapons that target and kill humans autonomously, or will they succumb to the relentless logic of striving for military advantage, and thereby sacrifice other values — diplomacy, rule of law, responsibility, human rights and dignity?

There are genuine disagreements over what, if anything, should be done to mitigate the risks of autonomous weapons. A handful of states argue that no new law or regulation is needed or that it is too early to regulate and we should wait to see how these weapons are used. Some propose soft law measures, such as a “political declaration” affirming the importance of retaining some form of human control over weapons systems and the use of force. But the majority of states are now proposing that new international law be negotiated on lethal autonomous weapons systems, including 26 states that seek a ban treaty.

The Campaign to Stop Killer Robots has from the beginning supported a comprehensive ban on the production and use of such weapons — a position that can also be viewed as a positive obligation requiring states to ensure that the targeting and engagement of weapons should always be kept under meaningful human control.

There are states that argue that existing international humanitarian law is sufficient to guide and regulate the development of these systems. They express confidence in both the clarity of existing legal norms and the mechanisms for enforcing them, such as weapons reviews required under the Geneva Conventions. However, there are serious risks that legal norms and the application of established concepts of humanitarian law could become fuzzier or weaker as technology advances and confidence in autonomous technology increases.

The proponents of lethal autonomous weapons argue that there could be real advantages to automating targeting and engagement, and that prohibitions or regulations could prevent beneficial applications of the technology. In a working paper submitted for the recent meeting, the United States offered a series of arguments that emerging technology, and AI in particular, could help to limit civilian casualties in armed conflict: by helping the human decision-maker better assess the risks of engaging a target, by helping a weapon reach its intended target, or by deactivating a weapon if no target is found. These uses of automation, if properly employed, do have the potential to reduce the impact of certain attacks on civilians and to conform to international law.

International humanitarian law encourages the reduction of risks to civilians in wartime. Military forces, in fact, have a duty to take reasonable precautions to protect civilians. But how do the laws of war, written for humans, apply to an automated system? What counts as “reasonable” for a machine?

As automation becomes more sophisticated and relied upon, it also becomes more important that the people who operate the automated systems understand how they use data and algorithms to make automated assessments. What are the essential aspects of human control? At what point might a system be “functionally delegated” the authority to kill, even when it has no such legal authority?

At the next meeting of the Convention on Certain Conventional Weapons in August, participants could agree to pursue the negotiation of a legally binding instrument that sets out clear standards. Or they might settle for a lesser, non-binding political declaration. But the latter would not articulate how the issue of human control of weapons should be framed, or how it might be assessed or enforced.

It is up to the diplomats at the CCW and their governments to prove that they can work together to address the full range of threats to humanity posed by autonomous weapons.

Peter Asaro is an associate professor at the New School in New York City. His current research focuses on the social, cultural, political, legal and ethical dimensions of military robotics and unmanned aerial vehicles. His article is edited from Bulletin of the Atomic Scientists, April 27, 2018, and was reprinted in PeaceMeal, May/June 2018.



How the Pentagon is preparing for the coming drone wars

More than a decade after the improvised explosive device became the scourge of the wars in Iraq and Afghanistan, the Pentagon is battling another relatively rudimentary device that threatens to wreak havoc on American troops: the drone. Once largely a preoccupation of hobbyists and experimenting companies, the vehicles are becoming a menace on the battlefield, where their benign commercial capabilities have been transformed into lethal weapons and intelligence tools.

Instead of delivering packages, some have been configured to drop explosives. Instead of inspecting telecommunications towers, others train their cameras to monitor troops and pick targets. Instead of spraying crops, they could spread toxic gas, commanders worry. Military strategists envision the day when they will be deployed in robot armies capable of swarming defenses in kamikaze raids.

Stopping the drones has become a challenge for the Pentagon and its allies. The unmanned aerial vehicles, as they are known, can range from the size of an insect to a shoe box to a large fixed-wing aircraft. Although they have not been anywhere near as deadly as IEDs, drones could become more lethal as technology improves, military officials fear.

In response, the Pentagon is attacking what it sees as a potentially major threat, working to develop lasers and microwaves to blast drones from the sky. The range of their use is “up to the creativity of the enemy,” said Lt. Gen. Michael Shields, director of a Pentagon agency called the Joint Improvised-Threat Defeat Organization (JIDO), which is focusing on countering the drone threat. The office opened in 2006 initially to combat IEDs, but as the enemy evolved, so too has JIDO, which now handles all sorts of “improvised threats,” such as vehicle-borne IEDs, suicide bombers, booby traps and, now, drones. “It was a natural progression for us,” Shields said. “Right now, JIDO is focused on nonstate-actor use of small drones, but there are certainly other capabilities that are out there — larger, faster and so forth.”

Some soldiers already carry specially outfitted “anti-drone” rifles that, instead of firing bullets, use pulses across radio frequencies that interfere with the vehicles’ controls. France and other countries have trained eagles and other birds of prey to attack enemy drones.

“There is definitely a sense of urgency,” said Luis Hernandez, a senior staff member at BAE Systems, which recently participated in the Hard Kill Challenge, a Pentagon-sponsored anti-drone competition. “We don’t want this to become another issue like the roadside bombs, the IEDs. Let’s attack this now.”

At the annual Association of the U.S. Army conference in Washington recently, anti-drone technology was on display on the floor of the convention center. Nammo, a Norwegian company, showcased a drone that had been blown apart by an “air burst” round — programmed to explode as it reaches its target.

Raytheon is taking a different approach, mounting a high-energy laser weapon on top of a militarized dune buggy that it says can be used to knock drones out of the sky. The company also has developed what it calls Phaser, a high-powered microwave blast that scrambles a drone’s avionics.

Lockheed Martin has a laser it calls Athena that is capable of frying the tail off a fixed-wing drone. And at an Army exhibit, officials showed a small quadcopter with what looked like a small bullet hole in it that was caused by a laser, not a gun.

While IEDs are cobbled together from mortar and artillery shells found in Afghanistan after decades of war, drones are easily accessible. A search on Amazon.com for “quadcopter” yields more than 80,000 results. And as the technology improves, the drones are becoming more capable.

“They’re used by basically everyone, and we’re seeing terrorists put them to use,” said Paul Scharre, director of the technology and national security program at the Center for a New American Security. “Think of them as flying IEDs. And while you haven’t seen casualties on the scale of IEDs, IEDs didn’t start out as that lethal, either. In 2003, they were largely a nuisance.”

That year, IEDs killed three U.S. service members. In 2010, they were responsible for 368 deaths, more than 60 percent of that year’s fatalities in the global war on terrorism, according to iCasualties.org.

Officials are not just concerned about the use of drones on the battlefield. Prisons have seen the vehicles buzz over fences to smuggle in contraband. In 2015, a quadcopter slipped past the Secret Service and crashed on the White House grounds. And airports worry about drones interfering with planes. On October 12 in Quebec City, a drone hit a small airplane in what officials said was the first such collision in North America. The plane, which was carrying six passengers, landed safely, but the incident raised alarms in Canada and the United States.

The United States has been using military drones to great effect for years, and the military’s dependence on them is only growing. Scharre thinks that soon they’ll be used to help troops peer around corners and search buildings for weapons or explosives, so that they become a unit’s “eyes and ears. It sounds like science fiction, but it doesn’t rely on any fundamental technology advances we don’t have today.”

– edited from The Washington Post, November 24, 2017
PeaceMeal, March/April 2018