A question of ethics

The way we use private cars and public transportation to get from A to B will change radically in the coming years. This transformation raises issues that need to be clarified beforehand—including ethical ones. Audi legal expert Martin Siemann discusses the Ethics Commission’s report on automated driving.

Birte Mußmann (copy)

Thanks to cutting-edge technology, autonomous driving has long since left the realm of sci-fi fantasy films. In fact, the time when vehicles will make their way through traffic with no intervention from human drivers is drawing ever closer. This will revolutionize mobility as we know it. And this sea change gives rise to a number of questions that call for answers. What will this future look like? What will be permitted and required for tomorrow’s self-driving series-production cars? In summer 2017, the newly established Ethics Commission on Automated and Connected Driving at Germany’s Federal Ministry of Transport and Digital Infrastructure (BMVI) tackled these issues, bringing together 14 academics and experts from the disciplines of ethics, law and technology. The diverse group included specialists in transport, law, information science, engineering, philosophy, theology and consumer protection as well as representatives of associations and companies. At their meeting, the group defined 20 propositions designed as a basis for potential policymaking. How high-tech vehicles react in hazardous situations was a central area of focus. Legal expert Martin Siemann shares insights into the way Audi as a manufacturer is handling these issues from a legal perspective.

The Audi Magazine: Automated driving is already a reality in concept cars. However, it will probably be several years before this technology hits our streets in production models. Is this really the right time to discuss ethics?

Martin Siemann, legal expert at AUDI AG: The public debate shows that these ethical issues are what concern people. This alone confers on us the responsibility not just to talk about these questions—we also aim to provide answers.

In many countries across the globe, we are now seeing the laws changing to allow automated driving systems to be used in regular traffic in the near future. But new legislation alone won’t help much. In the event that something should happen, we will quickly realize that laws are one thing, but public acceptance is far more important. A law may protect us from legal consequences. But at the end of the day, trust in this new technology and the corresponding acceptance throughout the population are the factors that will decide whether autonomous driving succeeds as a technology, and ultimately also becomes part of normal life. That’s why we want to answer this question now. And I think the Ethics Commission has made a valuable contribution to this.

If responsibility for a vehicle no longer rests with a human being because the car is fully autonomous, what about liability in the future?

Autonomous means the driver no longer has any way to influence the car—no steering wheel or pedals at all. The “driver” thus becomes a passenger. At present, this is just a vision, and it will be years before such technology is market-ready. The issue of liability with these systems has not yet been fully established. What is clear, however, is that this scenario will shift liability further toward the manufacturers, because they have the greatest impact on the systems’ design. Legislation as we know it is always ultimately connected to a person. So attaching liability to the autonomous vehicle itself would not currently be conceivable. Down the road, however, with robots increasingly moving into our daily lives, we may have to start thinking along these lines. But in the end, this issue will always revolve around the manufacturer.

From the legal standpoint, what is the status quo in the case of an accident involving a car equipped with automated driving systems, where the driver does not keep a constant watch on the road yet must potentially be able to take control again at any moment? Let’s assume that the accident occurred in autonomous mode.

As long as we are in the realm of automated systems, there will not—and indeed need not—be many changes in the liability question. In the event of an accident in automated driving mode, the vehicle owner’s liability insurance will be the first thing to take effect. After that comes the second step, namely figuring out whether the driver or the system caused the accident. If the automated system was to blame, the carmaker will be liable for the damage caused. As far as that goes, it’s the same way it works now.

Information: Core propositions defined by the Ethics Commission at the BMVI

Automated and connected driving is an ethical imperative if the systems cause fewer accidents than human drivers (positive balance of risk).

In hazardous situations, the protection of human life must always have top priority: damage to property is to be accepted if this prevents personal injury.

In the event of unavoidable accident situations, any distinction between individuals based on personal features (age, gender, physical or mental constitution) is impermissible.

Who is driving at any given time must be documented and stored (among other things, to resolve possible issues of liability).

Drivers must always be able to decide themselves whether their vehicle data are to be forwarded and used (data sovereignty).