While the European Parliament recently adopted a resolution to prohibit killer robots, some see them as the third great revolution in the arms industry, after gunpowder and nuclear weapons. Will we be able to prevent their rise?

Autonomous and adaptable arms

In 2017, a dystopian video entitled “Slaughterbots,” produced for the “Ban lethal autonomous weapons” campaign, alerted viewers to the dangers of developing autonomous arms, or “killer robots.” The authors imagined a conference taking place in the near future, during which an industrialist presents robots operating in complete autonomy, programmed to kill while leaving nothing to risk or chance. What follows is a highly realistic demonstration of the potential effects of such an invention were it to end up in the hands of terrorist groups, or of a tyrannical or genocidal government.

Although this video plays up fears about what already-existing weapons technology could produce, it zeroes in on a central problem for civil society and public authorities. On 12 September, in a plenary session, the European Parliament adopted a nonbinding resolution by a large majority to prohibit killer robots, or “lethal autonomous weapons.” The parliamentarians agreed that these arms challenge the socially implicit notion that humans, and humans alone, should be empowered to kill humans. Because killer robots operate largely outside human control, the parliamentarians concluded that an international ban should be put in place.

To clarify matters, legislators tried to define the term “killer robots,” citing for example “missiles that can select their targets or self-learning machines with the cognitive capability to decide when, where, and who to attack.” In short, weapons using artificial intelligence to choose and attack their targets completely autonomously. In an explainer video, Agence France-Presse goes as far as saying that these machines represent the third great revolution in the arms industry, after gunpowder and nuclear arms.

These killer robots are therefore distinct from automatic weapons systems, as Jean-Baptiste Jeangène Vilmer, a specialist in the field and director of the Strategic Research Institute of the École Militaire, explained to Numerama: “We basically need to distinguish automatic weapons, preprogrammed to carry out specific tasks in a predefined and controlled environment, from autonomous weapons, which decide if and when to carry out their tasks in a changing and unpredictable environment. Automatic weapons are deterministic, and therefore predictable. They perform every action according to a command, inevitably and regularly.”

Killer robots, by contrast, are above all unpredictable, capable of taking their own initiative. It’s worth noting, though, that these weapons, of which LCI gives recent examples, are still at the development stage and are only just beginning to be showcased at weapons fairs.

Better than soldiers?

While people might be uncomfortable with industrialists working on killer robots, the weapons do have advantages on and off the battlefield. At least that was the argument made in 2015 in Foreign Policy by the researcher Rosa Brooks. She believes that robots, being inherently less emotional and more predictable than humans, would be more efficient and would better comply with international law in actual operation. In other words, many mistakes would be avoided if autonomous weapons systems were to take over.

On the other hand, Brooks takes on the argument that humans, being moral beings, should be the only ones allowed to take life. Borrowing several examples from history and drawing on current crime figures, she shows that humans themselves often lack the moral reserve attributed to them. According to Brooks, the fear of killer robots is largely a fantasy, attributable to science fiction stories like Terminator that turn people into “technophobes” without real foundation.

In addition to improving safety and efficacy on the battlefield, killer robots have other advantages, according to their defenders, who stress the political and economic benefits the weapons can offer, as Jean-Baptiste Jeangène Vilmer explains in an article for Politique Étrangère.

The first advantage, pointed out by many industrialists, is political: one of the main reservations political leaders have when making military decisions is the human cost. With killer robots, the risk of losing soldiers (at least on the side deploying them) is virtually eliminated. In strategic and tactical terms, meanwhile, killer robots provide speed and efficiency: they don’t succumb to fatigue, they don’t lose concentration, and once engaged there is no issue with communicating orders. Lastly, Mr. Jeangène Vilmer points out that killer robots present a special advantage for those in charge: they cost less than soldiers.

An ethical and legal issue, a security risk

However, several institutional and civil society actors worry about the development of lethal autonomous weapons. Artificial intelligence researchers and NGOs, collaborating since April 2013 on the Stop Killer Robots campaign launched at the initiative of Human Rights Watch (HRW), are particularly active. HRW argues that killer robots pose a legal problem, since questions of blame and accountability for deaths, as well as for inevitable errors, are all but unresolvable. Who would be legally responsible for war crimes? The military command? Political decision makers? The industrialist who designed the weapon?

In addition, HRW released a public report in August arguing that lethal autonomous weapons contravene one of the fundamental clauses of international humanitarian law, the Martens Clause, which stipulates that when no particular provision governs military action, the “laws of humanity” and the “demands of public conscience” prevail. A moral dilemma also emerges, not with the development of killer robots but with their deployment on the ground: robots might be safer and more effective, but on the battlefield not everything is programmable, and military ethics come into play as the “fundamental frame of action,” explains Colonel Goya, editor of the blog La voie de l’épée.

In purely security-related terms, these killer robots present risks. As with nuclear arms, if nothing is done to curb their development, a veritable autonomous arms race could take place, further complicating the implementation of future legislation. China, the United States, France and Israel, to name but a few, are already working on developing the weapons. Moreover, as the researcher Cyrille Dalmont notes in an editorial for Le Figaro, “every machine, every artificial intelligence, every robot can be pirated, hacked.” In addition to proliferating, the robots could fall into the wrong hands and eventually turn against their “creators.”

A political struggle

While the UN has of course taken up the issue, creating a group of governmental experts to address lethal autonomous weapons, no binding measures have been put in place yet. Headed by Amandeep Gill, this group, which brings together the 125 states party to the Convention on Certain Conventional Weapons, is trying to reconcile the differing positions of states, experts and NGOs, thus far without much success, The Verge explains. In fact, many states oppose a ban on the weapons: Russia, the US, Australia, Israel and South Korea, all five of which blocked moves in early September toward a treaty to prevent their proliferation. Notably, these countries are at the forefront of development (see Boston Dynamics in the US and Samsung in South Korea). As Futura Science reports, France and Germany are also well on their way, though both want to keep the weapons under human control and support regulating them.

Going even further, others have called for a complete and total ban on lethal autonomous weapons, Libération reports. In Europe, only Austria has joined this movement, which is notably headed by China. These countries are supported by the biggest names in tech, particularly Elon Musk, who said in 2017 that artificial intelligence presents a bigger risk to the world than North Korea, and the founders of DeepMind, the Google subsidiary dedicated to artificial intelligence. In a public appeal, they stress the moral stakes and the risks these weapons bring. But they and their supporters are disappointed by the UN’s meager progress toward prohibiting killer robots, for the reasons outlined above.

The question now is who will win: the arms developers or their opponents? Even if killer robots offer real economic, strategic and political advantages, they also pose undeniable ethical, legal and security risks. The various working groups and expert opinions seem to be trending toward limitation, or at least a framework for the potential use of these weapons, but as of yet nothing is set in stone.