Would your self driving car sacrifice you to save others?

Would you buy a car if you knew that, in some circumstances, it would decide to sacrifice its occupants in order to prevent a larger catastrophe? Instinctively, a driver would swerve to avoid a buggy rolling in front of the car, potentially injuring themselves. But would you buy a car knowing it would make those decisions for you? And who programs the car's 'moral code'?

These moral issues must be addressed, and legislated on, before AI vehicles are allowed anywhere near the road.


Why the contribution is important

The problem is that there is no clear answer to what the car should do when faced with a near certainty of loss of life and a choice between driver and bystander. If the AI is programmed to save the maximum number of lives, there are times when it would inevitably sacrifice the owner of the vehicle in order to save others. But what if that occupant were a heart surgeon who saved multiple lives every day? By sacrificing the owner, more people would die in the long run. Should the car then save the occupant and risk the lives of other members of the public? Even if it made a difference from a moral point of view, how could an AI vehicle have this data? And what would stop selfish people from manipulating it to ensure their own safety?

As AI vehicles will be commercial products, manufacturers are unlikely to publicise how their cars are programmed to make these decisions. Should some models on the road seek to minimise overall harm, even at the expense of the occupant, while others prioritise the owner simply because they bought the vehicle? Neither seems satisfactory. A market of vehicles each with its own moral code could be disastrous, leading to further accidents where an AI mispredicts the movements of another AI vehicle it is about to collide with.

The benefits of AI and driverless vehicles cannot be realised unless these important moral questions are addressed - questions that leading philosophers have wrestled with for generations without solving satisfactorily.



by Karamu on June 18, 2018 at 02:18PM



  • Posted by OAIteam July 11, 2018 at 13:27

    Thank you for your comment. The moral issue you’ve raised is relevant to autonomous and manned vehicles alike, and to other areas where AI applications are being developed. Indeed, a number of implications of the scenario you’ve described require careful consideration.

    In March 2018, the government announced the start of a three-year review by the Law Commission of England and Wales and the Scottish Law Commission to examine any legal obstacles to the widespread introduction of self-driving vehicles and to highlight the need for regulatory reform. More can be read here: https://www.gov.uk/governme[…]-for-self-driving-vehicles.