The Tesla frenzy took a hit when the company confirmed that a man had died behind the wheel of his Autopilot-equipped car in Mountain View, California, on March 23. A similar incident occurred in 2016, when Joshua Brown was killed after his Tesla, with Autopilot engaged, struck a semi-trailer. For Elon Musk, the company’s CEO, the media’s focus on these events is unjust, even dangerous, because it turns public opinion against an innovation that, he says, will save hundreds of thousands of lives each year. So what are the facts?

Autopilot

Autopilot, which Elon Musk defends tooth and nail, is technically different from a fully autonomous car. A car equipped with Autopilot remains under the supervision of a driver and is considered only partially autonomous. The system reads its environment and the risks of given driving conditions and adjusts to them accordingly, but it still requires the driver to take back the wheel if a problem arises – or to let the system drive when conditions allow. It’s a lesser degree of autonomy, then, and the driver is still expected to keep their hands on the wheel.


According to Ars Technica, there is no empirical proof that Autopilot is safer than human driving. The only study on the question, conducted by NHTSA, the US government agency responsible for road safety, reports that crash rates are lower in Tesla vehicles equipped with Autopilot, but it does not attribute this to the system itself: several confounding factors could make Teslas safer than most other cars.

On its website, the company defends itself: “In the US, there is one automotive fatality every 86 million miles across all vehicles from all manufacturers. For Tesla, there is one fatality, including known pedestrian fatalities, every 320 million miles in vehicles equipped with Autopilot hardware.” But this comparison doesn’t establish causality. The figures apply to luxury cars, which, according to a 2015 IIHS study, are involved in far fewer fatal accidents, simply because they perform better across the board.
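To see what the raw numbers do and don’t show, it helps to turn the quoted figures into a common rate. The short Python sketch below is purely illustrative arithmetic on Tesla’s own claim; it introduces no data beyond the two mileage figures quoted above:

```python
# Illustrative arithmetic only: comparing the raw rates Tesla quotes.
# These are the figures from the company's statement, not new data.

US_MILES_PER_FATALITY = 86_000_000      # all vehicles, all manufacturers
TESLA_MILES_PER_FATALITY = 320_000_000  # Teslas with Autopilot hardware

# Convert to fatalities per 100 million miles, a common road-safety unit.
us_rate = 100_000_000 / US_MILES_PER_FATALITY
tesla_rate = 100_000_000 / TESLA_MILES_PER_FATALITY

print(f"US fleet:  {us_rate:.2f} deaths per 100M miles")   # ~1.16
print(f"Tesla:     {tesla_rate:.2f} deaths per 100M miles") # ~0.31
print(f"Raw ratio: {us_rate / tesla_rate:.1f}x")            # ~3.7x

# The catch: this ratio compares a luxury fleet against all cars.
# Without controlling for vehicle class, driver demographics and
# road types, none of the gap can be attributed to Autopilot itself.
```

That roughly 3.7x gap is exactly what the following paragraphs unpack: it mixes any effect of Autopilot with the effects of vehicle class, price, and everything else that distinguishes a Tesla from the average car.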

Moreover, Autopilot’s benefits have never been isolated from the other technologies integrated into Tesla cars, such as automatic emergency braking and collision warning. IIHS data show that combining those two features has cut property-damage claims involving other vehicles by 13% across the auto industry, and injury claims among occupants of other vehicles by 21%. And this is entirely independent of the Autopilot system itself, the feature that lets the driver take their hands off the wheel.

An accident accelerator?

One of the main criticisms of Autopilot is the danger created by semi-autonomy itself. By easing the demands on the driver’s attention without fully taking over, the system could cause new accidents born of inattention. In the case of the March 23 crash, Tesla acknowledged that Autopilot was active at the time of the accident, but it also noted that the driver had been warned by several signals and had failed to react. That still doesn’t explain why the car was headed into a highway barrier in the first place.


Keeping the driver engaged through more effective monitoring may be a better way to combine human abilities with recent advances, which undoubtedly have real safety value. That is the choice behind Cadillac’s Super Cruise, which comes equipped with a camera that monitors the driver’s attention: trained on the driver’s eyes, it checks that they aren’t too distracted and issues warnings through various signals, notably by vibrating the driver’s seat. Without controls of this kind, semi-autonomous driving carries even more risk.

From Elon Musk’s perspective, the tragedy isn’t enough to discredit the future of autonomous and Autopilot technologies. Built on machine-learning systems that improve constantly, they will, at scale, be safer than human drivers. But as the Arizona accident shows, they raise other problems, particularly around liability, and they aren’t infallible – their makers have never claimed zero risk.

Mistrust

Faced with these uncertainties, more data and further studies are needed to determine whether Autopilot can actually prevent a significant share of accidents. So far, according to Ars Technica, Tesla has been reluctant to publish its data, which prevents any convincing assessment of the innovation’s real potential and risks.

However, with safety at the heart of the arguments around Autopilot and self-driving cars, winning users’ trust is essential to revolutionizing a mode of transport as ubiquitous as the car. A study by the California Department of Motor Vehicles showed that a third of the incidents attributed to self-driving cars in the state were the result of malicious acts and vandalism: in a region where cars are legion, the newcomer is met with suspicion.


These reactions may be unfair: airplanes, like some metro lines, already operate largely automatically without complaint from passengers. Still, reassuring future users will be necessary for the large-scale spread of such systems, because when that time arrives, accidents like the ones in Mountain View or Arizona will keep happening. No offense to Elon Musk, but without real proof that his machines are an empirical improvement over human drivers, his toys will probably remain targets for discontented Californians.