As robotics research continues to make huge advances, researchers are worried about the lack of an ethical framework… but they’re not necessarily looking to the laws Asimov imagined more than 70 years ago.

On a clear, blue morning, a self-driving car makes its way down the road, calmly and quickly. Suddenly, a group of children runs onto the road chasing after a ball, far from the pedestrian crossing, while another self-driving vehicle transporting an elderly couple speeds down the road from the opposite direction. For three or four seconds, time stops. Who will the self-driving car choose to hit? Faced with this dilemma, the system is close to overheating.

This is not just one more account of a real self-driving car accident, like the one that occurred a few months after Uber unveiled its fleet. It is a scenario from Moral Machine – a somewhat morbid online game designed by MIT researchers to illustrate the difficult ethical decisions that makers of autonomous machines are already facing.

With advances in artificial intelligence (AI) and robotics research, will all robots become autonomous, like cars? Do we risk them turning against us, intentionally like in Terminator, or unintentionally like in I, Robot? Should we come up with rules to better control them?

Leaning over a PR2 robot – a dumpy-looking but friendly enough machine meant to assist researchers – Raja Chatila, the director of the Institute of Intelligent Systems and Robotics (ISIR) at the Sorbonne, shrugs by way of an answer. It seems obvious to him. “If there’s a robot near you, it had best have been designed not to put your life in danger through its actions or the powers it can exercise, and it shouldn’t harm you morally either,” he says.

For nearly 20 years, this 66-year-old researcher has been working on ethics and human-robot interaction. In France, he’s worked for CERNA (Commission for Reflection on the Ethics of Research in Digital Science and Technology), and internationally he has served as president of the IEEE (Institute of Electrical and Electronics Engineers) Global Initiative on Ethics of Autonomous and Intelligent Systems, where he advocates a more ethical conception of robots and autonomous systems, one that prioritizes the protection of human values.

Vicious cycles

Robot manufacturers are designing and producing faster than ever, but their creations are evolving without real constraints, in a legislative and ethical no man’s land. “Robotics and AI technologies are spreading rapidly; they’re found in many systems and are growing without any standard or certification. All other technologies went through standardization and certification phases when they were deployed … but here there is nothing!” says Raja Chatila. For this pioneer of French robotics, it is therefore “urgent to establish an ethical framework” for robots.

Most science fiction fans will probably wonder: why not simply use Isaac Asimov’s three laws? In 1942, the Russian-born American writer imagined three “laws of robotics” for his short story Runaround to limit the freedom of “conscious” machines:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

This set of rules has been revisited time and again, both in Asimov’s own short stories and novels and by many other science fiction writers and filmmakers bent on illustrating its flaws. One example is in Runaround itself, when a robot sent to help a human being instead ends up turning round and round in circles because it’s programmed not to put itself in danger. Even so, to many people these three laws remain a key point of reference because of their apparent logic.

“We must not confuse what Asimov imagined in 1942 with reality. The ethical principles on which he relied are quite consistent, but their feasibility is less obvious, because the robots currently being manufactured are nowhere near as smart as those of science fiction novels,” says Raja Chatila. “In order not to injure a human, a robot would need to be able to foresee all the consequences of its actions or its inaction, which is technically impossible. And finally, why should a machine absolutely protect its own existence?” asks the researcher.

Not applicable

Could the three “laws of robotics” be too vague, or worse, totally unsuited to modern robotics? At INRIA in Nancy, sitting in an office filled with small spider-shaped robots, AI researcher Jean-Baptiste Mouret shakes his head. “Most robotics researchers have read Asimov. But what we don’t necessarily realize is that these laws are in fact completely inapplicable,” he says.

For three years, he and a small team have been designing “Resibots” – “resilient” robots that can keep walking even with a broken leg, and whose objective is to move as quickly as possible (ideally, autonomously) despite possible obstacles. But it’s impossible to integrate Asimov’s laws into their code. “Imagine deploying the robots in an area struck by an earthquake, with wounded people… If a robot broke a leg, right now it would have no way of knowing whether it was stepping on a human being while it tried to keep walking. It doesn’t have common sense,” the researcher explains.

To be able to apply Asimov’s laws, robots need to understand the world around them. “That was absolutely not the case, for example, with Uber’s self-driving car, which struck and killed a pedestrian pushing a bicycle across the road in Arizona in March 2018. It ignored the human who was crossing the road because, since it doesn’t have common sense, it just does what the code tells it to do … and its designers had wrongly tuned the system to filter out probable ‘false positives’ – a setting meant to avoid unnecessary braking for ‘small’ obstacles,” explains Jean-Baptiste Mouret. “In fact, driverless vehicles only have a partial, incomplete understanding of the world. And it’s the same with all other types of robots,” he adds.
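To make that trade-off concrete, here is a minimal, purely illustrative sketch in Python. The names and thresholds (Detection, should_brake, FALSE_POSITIVE_THRESHOLD) are invented for illustration; this is not Uber’s actual code or architecture, only a toy version of the general idea of a confidence filter.

```python
# Illustrative only: a toy perception filter with an invented confidence threshold.
# Raising the threshold suppresses "phantom" braking events, but a real obstacle
# detected with low confidence gets discarded too.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # what the classifier thinks it sees
    confidence: float   # score between 0.0 and 1.0

# Hypothetical tuning choice: anything below this score is treated as noise.
FALSE_POSITIVE_THRESHOLD = 0.8

def should_brake(detections: list[Detection]) -> bool:
    """Brake only for detections the system is confident about."""
    return any(d.confidence >= FALSE_POSITIVE_THRESHOLD for d in detections)

# A real but unusual obstacle may be classified inconsistently, with low
# confidence on each frame, and therefore never trigger braking.
frame = [Detection("bicycle", 0.45), Detection("other", 0.30)]
print(should_brake(frame))  # False: the real obstacle is filtered out as noise
```

The point is not the particular numbers but the design choice they encode: tuning a system to avoid nuisance braking also tunes it to dismiss rare but real hazards.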

Moreover, even a harmless “companion robot” is always liable to “hurt” someone morally, or to be manipulated by its owner, “because it is very difficult for these machines to take context into account or to understand human feelings.” The scientist concludes: “It’ll take a while before machines are smarter than us.”

If transposing Asimov’s laws onto modern robotics seems difficult, would it not be better to tackle the problem at the source, by creating an ethical framework for the humans who make the robots? “For self-driving cars, it is the underlying design choices that count. For example, a manufacturer could make an ethical choice by having these vehicles drive very slowly to avoid accidents, rather than facing an impossible dilemma. But no one will want to do that,” Raja Chatila says while watching PR2’s dangling arms.

Killer robots

If there is a sector in which Asimov’s laws are likely to be trampled on from the start, it’s probably that of so-called “lethal autonomous weapons” (LAWs), better known as “killer robots.” Is it even morally desirable to create robots capable of selecting and “eliminating” targets without human intervention? How would we avoid setting off yet another arms race? South Korean researchers are nevertheless on the brink of creating such machines, and Google has been helping the US military use AI to improve drone targeting.

Jean-Gabriel Ganascia is a researcher at LIP6, the Sorbonne’s computer science lab, as well as chair of the CNRS Ethics Committee and a member of CERNA. He is currently conducting AI research at the University of Chicago. He doesn’t mince his words: “Asimov’s laws were much too crude: they pit robots against humans and imagine that a machine can be conscious. You have to imagine something else.”

Returning to the example of killer robots, the bald-headed researcher cites a 2011 proposal by the American roboticist Ronald Arkin. “He suggested that robot soldiers could be more ethical than humans, that soldiers should be replaced with machines, and that ‘just war’ laws should be implanted in them,” he says in a tone that sounds half-amused, half-frightened.

Of course, the idea of a robot “killing wisely” is hair-raising. But beyond that, the rules of a “just war” imply that the robot would be able to distinguish a civilian from a soldier and to use a proportionate amount of force in its attack. Such ethical rules are all but impossible to translate into algorithms: they are abstract concepts that do not reduce to concrete, objective criteria.

LAWs

Regarding “LAWs,” even if they don’t exist yet, “it will be humans’ responsibility to respect an ethical code – by refusing to manufacture machines capable of killing – rather than the robots’, since they aren’t aware of what they’re being used for,” says Raja Chatila. “Robots are humans’ assistants; the responsibility lies not with the machines but with those who make them,” he insists.

Several dozen researchers from around the world, at CERNA and the IEEE, are currently developing recommendations for researchers, engineers and manufacturers regarding “human rights, well-being and development, transparency, accountability, and risk reduction.”

Jean-Baptiste Mouret, who follows these discussions closely, agrees: “As soon as we write a program, it is a law, so we must include universal ethical principles in the code … not inspired by Asimov, but determined by the people who develop and distribute robots. What kinds of decisions should a future robot be allowed to make, given its potential to be dangerous?”

Mouret can picture “basic rules that would be put in place from the design stage. For example, for a robot to be allowed in the presence of a child, the power of its motors should be limited, it shouldn’t be too heavy, and it could be padded with foam. Basically, it should only be put on the market after having met certain standards and been certified.”
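As a thought experiment, such design-stage rules could take the form of a simple pre-market certification checklist applied to a robot’s specifications. The sketch below is hypothetical: the limits, field names and thresholds (RobotSpec, certify_for_children, MAX_MOTOR_POWER_WATTS, MAX_MASS_KG) are invented for illustration and are not taken from any real standard.

```python
# Hypothetical pre-market check against invented child-safety limits.
from dataclasses import dataclass

@dataclass
class RobotSpec:
    name: str
    motor_power_watts: float
    mass_kg: float
    foam_padding: bool

# Invented limits, standing in for what a real certification standard might define.
MAX_MOTOR_POWER_WATTS = 60.0
MAX_MASS_KG = 5.0

def certify_for_children(spec: RobotSpec) -> list[str]:
    """Return the requirements the design fails to meet (an empty list means it passes)."""
    failures = []
    if spec.motor_power_watts > MAX_MOTOR_POWER_WATTS:
        failures.append("motor power exceeds limit")
    if spec.mass_kg > MAX_MASS_KG:
        failures.append("robot is too heavy")
    if not spec.foam_padding:
        failures.append("no protective foam padding")
    return failures

toy = RobotSpec("companion-bot", motor_power_watts=45.0, mass_kg=3.2, foam_padding=True)
print(certify_for_children(toy))  # [] -> meets these (invented) requirements
```

In practice, real standards bodies would define the actual limits and test procedures; the sketch only illustrates the idea of certification as a gate before sale.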

“The human beings who design these systems must follow rules,” says Raja Chatila. “For example, they must respect people’s privacy (by designing systems that do not make private data public), not discriminate, not develop a system that can be misused, give the user the opportunity to regain control of the machine, and, as far as possible, prevent the robot from unduly influencing humans.”

One of his colleagues at the IEEE, the Australian researcher Andra Keay, sums up the work of the Global Initiative on Ethics of Autonomous and Intelligent Systems in five alternative “laws of robotics”:

  1. Robots should not be used as weapons.
  2. Robots should obey the law, especially laws concerning the protection of individual lives.
  3. Robots are products: as such, they must be safe and reliable, and give an accurate picture of their capabilities.
  4. Robots are manufactured objects: their capacity to mimic actions and emotions should not be used to trick more vulnerable users.
  5. A robot’s responsible party should be known.

Towards international standards?

Finally, how do we provide a framework for a sector that has none and that is dominated by private companies such as Softbank Robotics? “We mustn’t forget that robots are, above all else, products, and that their manufacturers are focused on making a profit,” Kevin Morin, a researcher at the New Digital Environments and Cultural Intermediation Laboratory (NENIC LAB) in Montreal, told Radio Canada’s “La Sphère.”

“Faced with this risk, we need international standards, just like we have for cars; otherwise it will not work,” says Raja Chatila. The problem is that not all countries share the same ethical values. “On these issues, countries’ sensitivities are extremely different. Asians, for example, do not have the same concerns about privacy as we do, and it does not bother them at all that their personal data is collected and used by others,” Jean-Gabriel Ganascia says. “It’s difficult to create universal rules,” he concludes.

Back at ISIR in Paris, Raja Chatila tries to stay optimistic. “We can adapt the standards somewhat from one country to another while keeping the general principles, but it is essential to create standards,” he says with a smile. “There is no reason we can’t do it. It’s in manufacturers’ interest to respect standards if they want to be able to sell their robots anywhere in the world. Besides, it is also possible to legislate at the international level: Facebook ended up complying with Europe’s GDPR.”

Translated by Meaghan Beatley