Should Fully Autonomous Weapons Systems Be Banned?
The Guardium is an unmanned ground vehicle (UGV) used by the Israeli military to patrol the Gaza border. It is able to patrol autonomously, but its weapon systems can only be engaged by remote human operators. The Goalkeeper CIWS is a point-defense cannon used on naval ships that is able to autonomously target and fire upon rapidly approaching missiles. What if the autonomous target identification and firing mechanisms of the Goalkeeper were used in military robots such as the Guardium, which operate in civilian theaters?
Such fully autonomous weapon systems are the topic of considerable debate, and the UN Convention on Certain Conventional Weapons is considering a preemptive ban on them, similar to the bans enacted against, e.g., blinding lasers and antipersonnel mines. A ban would prohibit the development and deployment of these weapons, as well as any research whose only benefit would be their development.
As a research ban would primarily affect the Artificial Intelligence community, many AI researchers have spoken up for or against a ban, and on January 29th a public debate on the issue was held in Austin, Texas at the 2015 AAAI Conference, one of the field's premier conferences.
The Debate
Arguing in favor of a comprehensive, preemptive ban on fully autonomous weapon systems was Stephen Goose, director of the Arms Division of Human Rights Watch. Under Goose's direction, HRW has been instrumental in the banning of blinding lasers, cluster munitions, landmines, and other weapons.
Arguing against such a ban was Ron Arkin of Georgia Tech. Arkin is a researcher in robotics and robot ethics, and is known for his “Ethical Governor”, a proposed system for preventing robots from performing unethical actions (a gross simplification on my part).
The debate, moderated by AAAI President Tom Dietterich and held in front of a crowd of hundreds of AI researchers, took the form of fifteen-minute position statements, four-minute rebuttals, questions posed by audience members, and two-minute closing statements.
In this blog post, I hope to relay the gist of each debater’s arguments. As this is a sensitive topic, I will add my brief commentary at the end of the post but otherwise focus on relaying the course of the debate as truthfully and fully as possible. I must also state at this point that any opinions I do communicate on this topic are mine alone and do not reflect any policy or collective opinion of the Tufts HRI Lab.
Point: Ron Arkin
First to speak was Arkin, whose presentation was entitled “Lethal Autonomous Weapon Systems and the Plight of the Noncombatant”. It is important to stress that while Arkin was arguing against an outright ban, he was not arguing in favor of autonomous lethal weapons. Arkin stated that he was against all killing, and that while he was not in principle averse to such a ban, he believed we should not rush to enact one until the issue had been thoroughly considered; for the time being, he argued, we would be better served by a moratorium on such weapons.
Arkin’s rationale for this position stemmed from his belief that autonomous weapon systems may in fact be used for good, saving noncombatant lives. He asserted that the status quo for noncombatants today is wholly unacceptable, and that robotics and AI can make wartime atrocities against noncombatants less likely to occur. “I am convinced”, said Arkin, “that [unmanned systems used as adjuncts to human soldiers] can perform more ethically than human soldiers are capable.”
In previous talks, Arkin has used a series of sobering statistics to make this point. Here are a few, taken verbatim from his 2010 paper in the Journal of Military Ethics entitled “The Case for Ethical Autonomy in Unmanned Systems”:
- Approximately 10 percent of Soldiers and Marines report mistreating noncombatants (damaged/destroyed Iraqi property when not necessary or hit/kicked a noncombatant when not necessary).
- Only 47 percent of Soldiers and 38 percent of Marines agreed that noncombatants should be treated with dignity and respect.
- Although they reported receiving ethical training, 28 percent of Soldiers and 31 percent of Marines reported facing ethical situations in which they did not know how to respond.
Arkin did not lay out all of these statistics during the debate, but stressed that while he has great respect for our men and women in uniform, the battlefield tempo is simply outpacing human ability to make ethical and accurate decisions.
Fortunately for noncombatants, robots may be able to perform more ethically than humans, for a variety of reasons cited by Arkin:
- Robots are able to act conservatively
- Robot sensors are better equipped for battlefield observations
- Robots can be designed without emotions
- Robots would not fall victim to human “scenario fulfillment”
- Robots can integrate multi-source information far faster
- Robots can independently and objectively monitor the ethical behavior of all parties on the battlefield
Arkin believes that if it is possible to create unmanned adjuncts to human soldiers that can act more ethically than their human teammates, then the development of such adjuncts is a moral imperative, and constitutes a humanitarian effort. As adjuncts, autonomous weapons would not replace humans, but would be used alongside them; a human would always be in their control loop.
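As a toy illustration of how such a veto layer might sit between an autonomous system's perception and its weapon release, consider the sketch below. This is my own simplification for intuition, not Arkin's actual "Ethical Governor" architecture; every field, predicate, and threshold here is invented for the example.

```python
# Toy sketch of an "ethical governor"-style veto layer.
# This is an illustrative simplification, not Arkin's actual system;
# all fields, predicates, and thresholds are invented for the example.

from dataclasses import dataclass

@dataclass
class ProposedEngagement:
    target_class: str           # e.g. "combatant", "unknown", "noncombatant"
    near_protected_site: bool   # e.g. hospital, school, place of worship
    expected_collateral: float  # rough collateral-damage estimate in [0, 1]

def governor(engagement: ProposedEngagement) -> str:
    """Return 'PERMIT', 'SUPPRESS', or 'DEFER_TO_HUMAN'.

    The governor can only withhold lethal force, never initiate it,
    and defers to the human in the control loop whenever uncertain.
    """
    if engagement.target_class != "combatant":
        return "SUPPRESS"                     # act conservatively on ambiguity
    if engagement.near_protected_site:
        return "SUPPRESS"                     # hard constraint, no exceptions
    if engagement.expected_collateral > 0.1:  # invented threshold
        return "DEFER_TO_HUMAN"               # proportionality is a human call
    return "PERMIT"

# An ambiguous target is suppressed rather than engaged:
print(governor(ProposedEngagement("unknown", False, 0.0)))  # SUPPRESS
```

Note that in this framing the governor is purely restrictive: it can only narrow the set of actions a human-in-the-loop system would otherwise take, which is what makes Arkin's "adjunct" characterization plausible.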
Counterpoint: Stephen Goose
Before relaying Goose’s arguments, I must state that my summary of his remarks comes from a nearly identical talk he gave during the “AI & Ethics” workshop conducted on the first day of the conference. I apologize if I inadvertently include any points raised during his workshop talk that did not actually come up during the debate.
While Arkin was emphatic that he was opposed to all killing, Goose was emphatic that he was not opposed to military robots, but only to fully autonomous weapon systems.
Goose asserted that while there were clearly many possible advantages to autonomous weapons, these were outweighed by the negatives, and that any possible advantages would not be lost by keeping a human in control.
Goose laid out several objections to what he terms “Killer Robots”:
- A belief that it is simply morally wrong to give robots decisions over life and death, and that this would constitute a “loss of humanity” in more ways than one
- The Geneva Conventions’ “Martens Clause”, which requires that weapons comply with the dictates of public conscience, a standard he argues fully autonomous weapons fail to meet
- The unlikeliness that killer robots could comply with the principles of distinction and proportionality, which rest on inherently subjective human judgements
- The risk that robots could inadvertently violate the rights of combatants who are injured or who have surrendered
- The accountability gap: a robot can’t be held liable for its actions, so who is? Its manufacturer, programmer, commander, or someone else?
Finally, Goose expounded upon concerns about proliferation, the possibility that further removing humans from the act of killing would lead to more war, and the fact that killer robots could be used as a tool of repression by autocrats.
Rebuttal: Arkin
Arkin rebutted with the following points, among others:
- The AAAI, not bureaucrats, should play a leading role in defining what is meant by “autonomy” and “meaningful human control”
- Neither tanks nor robots should be used in law enforcement
- Enforcement of a ban is infeasible
- If we can save human lives with this technology, it’s irresponsible to ignore it.
Note that some points in this list responded not to statements made by Goose, but to possible misconstruals of Arkin’s own position.
Rebuttal: Goose
Goose rebutted that a moratorium is legally tricky, and that a moratorium (such as the one imposed by the DoD) would not inhibit research in any way. He also dismissed the idea that “killer robots” could conceivably be limited to constrained circumstances.
Final Remarks
After several questions from the audience (some more relevant than others), closing remarks were made.
Arkin closed with three points.
- If your research is of any value, it will eventually be misused
- He fully supports a moratorium
- We have a responsibility to try to save human lives.
Goose’s closing statements reiterated that some weapons simply shouldn’t be used.
My Thoughts
Ultimately, my thoughts on this issue come down to one question, also posed by an audience member at the debate: while I agree that adding ethical safeguards to robots could save human lives, how much of the ideal espoused by Arkin requires that robots have the power of life and death? The ability to “refuse to shoot” in unethical circumstances would be a valuable addition to any weapon system, remotely controlled or otherwise, and this ability would be unlikely to be affected by a ban on autonomous weapon systems. Yes, robots may also come to perform targeting and firing better than humans, but does this ability (the sole ability, as far as I can see, which would be affected by the ban) help in the creation of ethical agents in any way? I’m not sure it does. The sketch below illustrates the separation.
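Here is a minimal sketch of that separation, assuming a hypothetical weapon interface of my own invention: the “refuse to shoot” check gates every fire command, whether it originates from a remote human operator or from an autonomous targeting routine, so banning the autonomous path leaves the safeguard untouched.

```python
# Hypothetical sketch: the "refuse to shoot" safeguard is independent
# of who issues the fire command. All names here are invented.

def ethics_check(target: dict) -> bool:
    """Return True only if engaging the target appears ethical."""
    return target.get("class") == "combatant" and not target.get("surrendered")

def process_fire_command(source: str, target: dict) -> str:
    # The veto applies to every command, human-issued or autonomous.
    if not ethics_check(target):
        return "REFUSED: ethics check failed"
    if source == "autonomous":
        # A ban on fully autonomous weapons would simply forbid this
        # branch; the refusal logic above would be unaffected.
        return "REFUSED: autonomous engagement disallowed"
    return "FIRED"

# A human-issued command against a surrendered combatant is refused:
print(process_fire_command("human_operator",
                           {"class": "combatant", "surrendered": True}))
```

In this framing, a ban only ever touches the autonomous-source branch; the ethical safeguard lives upstream of it, which is why I am skeptical that a ban would impede research into ethical agents.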