By Léna Charbonnière, Camille Déchavanne, Lauriane Lauffenburger, Céleste Lemarchand
Is society ready to welcome cars that drive themselves?
Big cities such as Paris are swarming with electric scooters. In Lyon, a subway line runs without any driver. And in Australia, a freight train now operates fully autonomously. Our society is evolving, and the next state-of-the-art prototype will be the autonomous car.
The idea behind the autonomous car is quite simple: create a vehicle that governs itself and can move safely with little or no human intervention. Self-driving cars combine a broad range of sensors to perceive their surroundings (radar, lidar, sonar, GPS, odometry and inertial measurement units) and to interpret the signals received. Basically, you just sit back and let the car do the work. Sounds great, doesn’t it?
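To give a flavor of what “combining sensors” can mean, here is a minimal sketch of one classic fusion rule, the inverse-variance weighted average, where more precise sensors get more weight. The sensor names and numbers are purely illustrative, and real vehicles use far more sophisticated techniques (e.g. Kalman filters):

```python
def fuse_estimates(readings):
    """Inverse-variance weighted average of (value, variance) pairs:
    the smaller a sensor's variance, the more its reading counts."""
    weights = [1.0 / var for _, var in readings]
    return sum(w * val for w, (val, _) in zip(weights, readings)) / sum(weights)

# Hypothetical distance-to-obstacle readings in metres: (value, variance)
sensors = {
    "lidar": (10.2, 0.01),   # precise at medium range
    "radar": (10.5, 0.04),   # noisier, but works in rain and fog
    "sonar": (11.0, 0.25),   # coarse, short-range
}
fused = fuse_estimates(list(sensors.values()))
print(round(fused, 2))  # → 10.28
```

The fused estimate lands close to the lidar reading because lidar reports the smallest variance; that is the whole idea of weighting by confidence.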
But it is actually far more complex than it seems. Driving is one of the trickiest activities a person does daily. There are rules to apply, directions and paths to follow, priorities to respect, and every situation is different. Moreover, we are not alone on the road and have to deal with other drivers! Our behavior also depends on theirs, and needless to say, the judgment calls we make every day are difficult to encode in an algorithm.
When it comes to self-driving cars, a few very famous names come to mind: Tesla, Waymo (Google), General Motors, Toyota, Honda… These companies are promoting a brand-new, modern way of driving and, above all, a safe one: “we’re building the world’s most experienced driver” (Waymo team).
However, risk can never be entirely eliminated. Knowing this, an important question must be considered: the notion of responsibility. Mr. HENRY, an industrial designer working for a company that manufactures automobile seats, noticed that “most people who had questions and raised fear about using this new product were over 50”. Indeed, some people are supportive of this innovation, but others are very conscious of the risks involved. Either way, the question of responsibility resonates in everyone’s mind.
Let the car do your work
For now, the answer seems to depend on the car’s level of autonomy. There are five levels: from simple speed control up to fully autonomous driving, the user progressively cedes control to the vehicle as the level increases. Most people assume the Tesla Autopilot is a Level 5, requiring no human assistance. In fact, it is currently considered Level 2 to 3, which means the car handles acceleration, changes of direction and parking, as Mr. HENRY stressed. The driver has to remain present while the Tesla is driving and must be ready to take back control. Thus, it cannot be considered a true self-driving vehicle. Each time a Tesla incident has involved the Autopilot mode, the company’s reaction has been the same: putting the blame on the human driver. So even though Tesla vehicles are marketed as very futuristic and autonomous, they are technically not driverless cars, and the user is responsible.
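The scale in question is the SAE J3016 grading of driving automation, which actually counts a Level 0 (no automation) as well. A small lookup table can summarize it; the wording below is a paraphrase for illustration, not the standard’s exact text:

```python
# SAE J3016 levels of driving automation (paraphrased summaries)
SAE_LEVELS = {
    0: "No automation: the human performs all driving tasks",
    1: "Driver assistance: steering OR speed is assisted (e.g. cruise control)",
    2: "Partial automation: steering AND speed, human supervises at all times",
    3: "Conditional automation: car drives, human must take over on request",
    4: "High automation: no human fallback needed within a limited domain",
    5: "Full automation: no human driver needed anywhere",
}

def driver_must_be_available(level):
    """Up to Level 3, a human must stay engaged or at least ready to
    take back control; from Level 4 the car handles its own fallback."""
    return level <= 3
```

On this scale, the responsibility question splits exactly where the article says it does: a Level 2 system like Autopilot keeps the driver in the loop, while Levels 4 and 5 leave no human to blame.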
Moreover, Mr. HENRY points out that “from Level 3 to 5 there is no more legislation, so we don’t know who is responsible”. It would make no sense to blame only the user when the AI is more and more involved. But who, then? The company that sold the AI system, the manufacturers, the engineers…? Right now, we don’t know, but according to Mr. HENRY, “for Level 5 auto manufacturers will take one more level of responsibility”.
Can Ethics and Technology get along?
One might wonder why companies are trying their best to develop a new type of vehicle that revolutionizes driving yet seems so complicated! Indeed, how can a vehicle make ethical judgments on its own? Giving a robot a moral compass seems to be one of the greatest challenges of our time. There are countless ethical dilemmas that could arise unpredictably. For instance, how would an autonomous car react if a child suddenly crossed at an intersection? We asked Mrs. DEVILLIER, a law professor at Grenoble Ecole de Management and member of the Horizon 2020 Commission Expert Group, to advise on specific ethical issues raised by driverless mobility. According to her, there are only two options: “either the car brakes and avoids the child but risks injuring the passengers of the car” or “the car protects the passenger and injures the child”. So ethical and moral frameworks must be determined and defined before putting these cars into circulation. Communication and transparency prove essential for introducing them to the public and avoiding confusion or panic.
However, will that be enough to earn people’s trust? Are we willing to give up control to a machine? For some individuals, it would be a relief. For commuters who get stuck in traffic jams, it would be very convenient: with a self-driving car, they could pursue other activities and optimize their time. Moreover, the AI system could be safer and more reliable than humans. As a result, it attracts the attention of many actors, which explains Gartner’s encouraging forecast that “1 million Level 3 and above will be produced annually by 2025”. Companies are betting on removing the human-error element to produce safer cars and be part of this evolution. But is society ready for this change? Will people wait for the perfect driverless car to exist, or would a 10% safer car be enough in their opinion? Looking at the figures, the answer should be clear: a 2017 study from the RAND Corporation established that more lives would be saved if autonomous cars were only slightly safer (about 10%) than traditional cars.
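RAND’s point can be made with a back-of-envelope calculation. The figure below for annual U.S. road deaths is an approximation used only for illustration; the real RAND model is far more detailed:

```python
# Back-of-envelope version of the RAND argument: even a modestly safer
# autonomous fleet saves lives at scale. Figures are illustrative.
annual_us_road_deaths = 37_000     # approximate U.S. total, circa 2017
relative_improvement = 0.10        # "about 10% safer" than human drivers

deaths_with_autonomy = annual_us_road_deaths * (1 - relative_improvement)
lives_saved_per_year = annual_us_road_deaths - deaths_with_autonomy
print(int(lives_saved_per_year))  # → 3700
```

Even at a mere 10% improvement, the saving compounds into tens of thousands of lives over a decade of deployment, which is why RAND argues against waiting for perfection.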
Yet self-driving cars still carry many unknown risks. According to another RAND Corporation research report (2017), it would require an enormous amount of data (a minimum of 100 million miles) to obtain an objective estimate of the safety of driverless cars, which would take hundreds of years. Indeed, Waymo, today’s leader in driverless cars, has only just passed 10 million driven miles, largely because of the heavy regulations that slow the process down.
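The order of magnitude behind RAND’s claim can be sketched with a standard statistical rule of thumb: if a fleet drives n miles with zero fatalities, the 95% upper confidence bound on its fatality rate is roughly -ln(0.05)/n. This is a simplification of RAND’s analysis, which also considers injuries and crashes, not only fatalities:

```python
import math

def miles_to_demonstrate(failure_rate_per_mile, confidence=0.95):
    """Miles of failure-free driving needed to bound the failure rate.
    With zero failures in n miles, the one-sided upper confidence bound
    on the rate is -ln(1 - confidence) / n; here we solve for n."""
    return -math.log(1.0 - confidence) / failure_rate_per_mile

# U.S. human-driver fatality rate: roughly 1.09 deaths per 100 million miles
human_fatality_rate = 1.09e-8
miles_needed = miles_to_demonstrate(human_fatality_rate)
print(f"{miles_needed / 1e6:.0f} million miles")  # → 275 million miles
```

Hundreds of millions of fatality-free miles just to match the human benchmark with 95% confidence: against Waymo’s roughly 10 million miles to date, the “hundreds of years” estimate stops sounding like an exaggeration.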
Humans or Technology at fault?
Technological innovations and digitalization have a fundamental impact on our environment, raising major legal concerns. “The current legal framework […] is completely obsolete”, but some improvements are being made. “A part of the new legal framework already exists. Besides, the PACTE law (2017) already contains some provisions on autonomous vehicles, and in the upcoming months there will be new rulings to clarify this”, reveals Mrs. DEVILLIER.
Even if it sounds paradoxical, the law is not the same for every human being: children, for instance, may be treated differently from adults. What about artificial intelligence? For some experts, there is today no need to give emerging digital technologies a legal personality. Since harm caused by fully autonomous technologies is generally reducible to risks attributable to natural persons, new laws directed at individuals appear to be a better response than creating a new category of legal person (i.e. AI). Moreover, attributing any sort of legal personality to digital technologies may raise several ethical issues. “Robots have no legal personality, so autonomous cars do not either. Indeed, there is always a human who manages their algorithms, who makes the updates. They don’t evolve by themselves”, stresses Mrs. DEVILLIER. Thus, in the eyes of the law, AI cannot be held accountable nor considered a moral person holding any responsibility.
So, in case of an accident, who is responsible? Sources of harm are not always clearly identified in legislation, and European law regimes handle such uncertainties, when there are multiple potential sources, quite differently. Even if something is proven to have triggered the harm (for instance, an autonomous car collided with a tree), the real reason for it is not always equally evident. Numerous scenarios can be imagined: the car may have been poorly designed; it may have misread the data or received incorrect data; a software update done by the original producer or by some third party may have been flawed; the user may have failed to install an update that would have prevented the collision; and so on. But who can be held responsible? According to Mr. HENRY, “in unclear situations, neither the car manufacturer nor the driver will directly claim responsibility”. Finally, some people may simply not know how to use such a car. There will be a period of unknowns and uncertainty, but which brand will be the first to sell a car and take on all these problems?
It is a very complex vehicle, and today, in addition to the manufacturers’ responsibilities, there are telecom infrastructure providers, motion-detector companies and software companies involved. Experts agree on the difficulty of the situation, and so does Mrs. DEVILLIER: “all companies blame each other because there are lots of accident hypotheses and none of them want to be the one automatically designated”.
A growing need for an updated legal framework
Moreover, it is important to stress that technology is not the only open question. In the end, no one really knows how to react in front of a driverless vehicle. With so little information on how this futuristic car would act, how can an individual make the right move? In such cases, it seems very difficult to prove who is at fault, especially against well-known car manufacturers such as Renault. Fortunately, some data can already reasonably be logged to reconstruct events and causal chains, both crucial for allocating liability in judgments. For instance, logged data could show which autonomous vehicle caused an accident by failing to reply to a signal sent by another.
But if a vehicle commits an offense in autonomous mode, the driver will in some cases be responsible. As a matter of common sense, the user is required to brake if the vehicle mistakenly exceeds the authorized speed. Indeed, the driver has to be very attentive and ready in case a bug occurs. As we are still at the beginning, there is no total freedom, since the car is not 100% autonomous. But it is already a relief to some people, especially “when arriving on the highway or a road with the same security guardrails and continuous lines”, affirms Mr. HENRY. As the highway is more predictable than country roads, the autonomous mode proves better suited and safer there.
As said, regulation will have to evolve, but some rules will stay the same. Exceeding the speed limit for emergency reasons, such as “taking someone to the hospital […] will be tolerated by the police, as is the case today”, reassures Mrs. DEVILLIER. Applying such ethical rules to the autonomous car’s driving code is comforting and humanizing. The obligation to insure your car will remain as well: since “autonomous cars have no legal personality”, Mrs. DEVILLIER affirms, ensuring safety through new legal insurance will be essential and unavoidable. Finally, a new driving license will be necessary so that people “get used to taking control and driving them”, in order to minimize the risk of accidents by educating new users, says Mrs. DEVILLIER.
Fortunately, France has already made great progress on the legal framework, since “it is one of the first European countries to take active care of it”, states Mrs. DEVILLIER. However, too many laws can have a restraining effect, and “early adopter profiles will be the first to face the challenge”, highlights Mr. HENRY. So, to keep evolving, the law will have to make it as easy and clear as possible for this futuristic car to come into our lives. All these issues have concrete policy implications today and will affect millions of lives within a few years. Ethics still needs to be discussed and incorporated into new laws and regulations if we really want to make autonomous vehicles a reality. Yet, will it be enough?
Liability for Artificial Intelligence and Other Emerging Digital Technologies by the Expert Group on Liability and New Technologies – European Commission
Driving to Safety – How Many Miles of Driving Would It Take to Demonstrate Autonomous Vehicle Reliability? by Nidhi Kalra and Susan M. Paddock – RAND Corporation
The Enemy of Good – Estimating the Cost of Waiting for Nearly Perfect Automated Vehicles by Nidhi Kalra and David G. Groves – RAND Corporation
Gartner forecast analysis on autonomous vehicles: https://www.gartner.com/en/documents/3969892/forecast-analysis-autonomous-vehicle-net-additions-inter
The Robot’s Dilemma by Boer Deng: https://userweb.fct.unl.pt/~lmp/publications/online-papers/The%20Robot’s%20Dilemma.pdf
Top 10 Strategic Technology Trends for 2020 by Gartner: https://v2.grenoble-em.com/pluginfile.php/158714/mod_resource/content/0/top_10_strategic_technology__432920.pdf
The Ethical Dilemma of Self-Driving Cars by Patrick Lin (TED-Ed): https://ed.ted.com/lessons/the-ethical-dilemma-of-self-driving-cars-patrick-lin
Vox – Self-driving car safety: how safe is safe enough? https://www.vox.com/recode/2019/5/17/18564501/self-driving-car-morals-safety-tesla-waymo
The Guardian – view on car culture: change is coming: https://www.theguardian.com/commentisfree/2019/dec/26/the-guardian-view-on-car-culture-change-is-coming