In recent months, a slew of articles has discussed the possible dangers posed by self-driving cars. Tragic accidents, such as the March 19 incident in Tempe, Arizona, have made matters worse and raised many questions about how these vehicles operate, and notably, how they “see,” which is the main cause for concern.
The “eyes” of the car
Concretely, self-driving cars operate like this: first, their trajectory is mapped out by a geolocation system more precise than classic GPS, which accounts for every known obstacle and regulation along the route. Next, various sensors on the vehicle stream information in real time to a central computer in the car. After processing all that data, the computer modifies the trajectory to accommodate moving or previously unseen obstacles. In short, these sensors are the car’s “eyes.” According to the news site Jalopnik, the sensors go so far as to reconstruct the human “senses” we use while driving.
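In code, that sense-and-adjust loop might look something like the following minimal sketch. Every name here is made up for illustration; this is not any manufacturer’s actual software.

```python
# Hypothetical sketch of the sense-plan-act loop described above.
# All class and function names are illustrative, not a real vendor API.

class Sensor:
    """Stands in for a camera, radar, ultrasonic or LiDAR unit."""
    def __init__(self, readings):
        self.readings = readings  # obstacle distances in meters

    def read(self):
        return self.readings

def plan_step(route, sensors, safe_distance=5.0):
    """Return the route, prepending a 'BRAKE' action if any sensor
    reports an obstacle closer than safe_distance meters."""
    nearest = min(d for s in sensors for d in s.read())
    if nearest < safe_distance:
        return ["BRAKE"] + route
    return route

route = ["continue", "turn_left"]
sensors = [Sensor([12.0, 3.2]), Sensor([40.0])]
print(plan_step(route, sensors))  # obstacle at 3.2 m < 5 m, so brake first
```

The real systems fuse many sensor streams and re-plan continuously, but the shape of the loop is the same: read sensors, detect obstacles, adjust the trajectory.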
There are four main types of sensors on most self-driving cars, including Waymo and Uber cars. Let’s take a look at Waymo’s Chrysler Pacifica minivan as an example. First off, eight cameras located on the roof and sides of the car allow it to measure distances between the vehicle and its surroundings, as well as detect the presence of obstacles. These include front-facing cameras, which are used to identify colors, signs, pedestrians, cyclists and animals. In addition to front-facing cameras, some luxury cars also contain infrared cameras, which serve the same purpose, but at night.
Next, five radar sensors located in the front and the back of the vehicle allow it to measure distances between itself and a variety of obstacles. This is an additional measure to avoid collisions.
Additionally, self-driving cars include ultrasonic sensors to detect obstacles close by, just a few meters away.
Finally, three LiDAR (Light Detection and Ranging) units on the roof and in the front of the vehicle emit pulses of infrared light to measure the distance between the vehicle and its surroundings. This creates a 3D map of the car’s surroundings. Basically, LiDAR is a kind of radar, except that it uses light waves.
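The ranging principle behind LiDAR is simple time-of-flight arithmetic: a pulse of light travels to an object and back, so the distance is the speed of light times the round-trip time, divided by two.

```python
# Time-of-flight distance estimate, the principle behind LiDAR ranging.
C = 299_792_458  # speed of light in a vacuum, m/s

def lidar_distance(round_trip_s):
    # The pulse travels out and back, so halve the total path.
    return C * round_trip_s / 2

# A pulse returning after 200 nanoseconds means an object about 30 m away.
print(round(lidar_distance(200e-9), 2))  # -> 29.98
```

Repeating this measurement millions of times per second across a sweeping field of view is what produces the 3D map mentioned above.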
Can a car go blind?
The March 19 accident caused by Uber’s self-driving Volvo XC90 raised a number of concerns, chiefly whether sensors can be trusted. The fact that it happened at night led many to believe the problem was the car’s “vision.”
However, a preliminary investigation by the US National Transportation Safety Board found that this wasn’t the case. The LiDAR units and radar detected the pedestrian six seconds before impact; the system then evaluated the pedestrian’s trajectory and speed before concluding, 1.3 seconds before impact, that the vehicle needed to pull the emergency brake. That’s when the first problem arose: a standard Volvo can activate emergency braking on its own, but that function is disabled while the car is driving autonomously. Under those circumstances, it was up to the human safety driver to brake. The driver, however, hadn’t received any kind of alert, and wasn’t paying attention.
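To get a feel for why 1.3 seconds matters, a bit of back-of-the-envelope arithmetic shows how far a car travels during such a window at a few hypothetical speeds (these speeds are illustrative, not figures from the NTSB report).

```python
# How far does a car travel during a 1.3-second decision window?
# Speeds below are hypothetical examples, not the NTSB's figures.
MPH_TO_MS = 0.44704  # miles per hour -> meters per second

def distance_covered(speed_mph, seconds):
    return speed_mph * MPH_TO_MS * seconds

for mph in (25, 40, 55):
    print(f"{mph} mph -> {distance_covered(mph, 1.3):.1f} m in 1.3 s")
```

Even at moderate speeds, a car covers tens of meters before a late braking decision can take effect, which is why the missing driver alert proved so consequential.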
How does Tesla work?
One thing’s for sure: Google, Uber and the others are all racing to bring self-driving cars to the public as quickly as possible. But is there an alternative to the Uber and Waymo approach? Elon Musk is proposing a LiDAR-less model, contrary to the majority of companies developing self-driving cars.
Let’s recall that Tesla’s Autopilot is not a self-driving car. It’s an automated driver-assistance system: the vehicle is semi-autonomous and needs a human ready to take the wheel if there’s a problem. It relies on ultrasonic sensors, radar and a series of cameras giving it full 360° vision, interpreted by AI and neural networks.
According to Musk, a LiDAR sensor is “expensive, ugly and unnecessary.” “To me, it’s just a crutch,” The Verge reported him saying on February 7. He backs his position with economic and aesthetic arguments, but also with data already collected by his cars on the road. Musk believes his sensor suite, not LiDAR, is the key to finally making a fully autonomous car.
What remains to be done?
“A self-driving car is easily deceived.” Will Elon Musk give us a reason to stop believing that sentence? Right now, a truck with an image of a bike or another vehicle printed on its back door can easily trick a self-driving vehicle’s cameras, which lack a human being’s critical judgment and visual acuity. Similarly, a camera can’t interpret a pedestrian’s facial expression, or a driver’s attitude, the way a human can. But engineers at Waymo and other companies are working to solve these problems through artificial intelligence, The Verge reports.
Finally, though many questions remain, we know that self-driving cars’ “eyes” work. The brain just needs to catch up. Essentially, artificial intelligence must improve performance and ensure better communication between the sensors and the controls, via machine learning. “It’s not a good thing that self-driving cars have killed people, but testing them in real-life situations is a necessary step if we want to continue moving towards a safer and more promising future,” The Next Web writes.