An interesting article from Fortune poses a very stark question: can you ethically crash a self-driving car?

It’s a difficult question to answer because the promise of self-driving cars is so enticing that it can be hard to entertain a negative thought about them. Without the human element involved in driving, it is argued, there will be a drastic drop in the number of motor vehicle accidents. But what will happen when a self-driving car is put into a situation out on the road where an accident is inevitable? How will the autonomous vehicle respond? And what would the rule of law say about such a crash?

Imagine this scenario: a self-driving car is pinned on all sides by traffic. A car sits to its left and right, and a vehicle follows right behind it. All are traveling uniformly at the same speed. The car in front of the self-driving car is towing cargo, and that cargo unexpectedly falls onto the road. What does the self-driving car do?

A crash is inevitable in this case, and the legality of handling its aftermath is uncertain. For example, did the people who programmed the self-driving car anticipate this situation? And if they did, does that mean they made a premeditated decision regarding the collision? And if that’s true, then what are the legal consequences?

Self-driving cars will surely bring a new dawn of safety, but they will also bring a new dawn of legal questions. How we answer and address those questions will be critical.

Source: Fortune, “Can You Crash An Autonomous Car Ethically?,” Andrew Nusca, Nov. 16, 2016