When Forced To Choose, Who Will Self-Driving Cars Decide To Kill?
It's a fascinating question: when every choice a self-driving car can make results in a death, whom should it kill?
It reminds me a lot of I, Robot. Imagine that we've come so far.
Researchers from the Toulouse School of Economics conducted a survey to see what decisions people would make.
The scenario was that the car would either kill 10 people and save the driver, or swerve and kill the driver to save the group. The survey was answered by 913 people from all around the world.
They found that more than 75 percent of respondents would sacrifice the 'driver' to save 10 people, while only around 50 percent supported self-sacrifice when it would save just one person.
One scenario randomized the number of people who would be killed if the car did not swerve, and asked whether the car should sacrifice the passenger or the bystanders.
The second version tested how people would program the cars themselves: always sacrifice the passenger, always protect the passenger, or decide at random. Participants were asked to rate the morality of each option.
The third group read a story in which ten people were saved because the car swerved, killing the passenger. They were asked to imagine themselves first as the passenger, then as a bystander, and to rate the morality of the outcome on a slider.
The majority, as one might expect, supported sacrificing the few for the many. But as the scenario in I, Robot suggests, is that always the best choice?
In the film, the main character, Spooner, is in a car accident. A robot sees it and moves to save whatever life it can. There is a little girl in the other car, and Spooner tells the robot to save her instead, but the robot calculates that her chances of survival are lower than his. The robot saves Spooner.
Studies have shown that driverless cars could reduce traffic mortality by up to 90 percent. But the question remains: what happens in the remaining 10 percent of cases?
On another note, Volvo has stated that it will take full responsibility when one of its driverless cars is in an accident.
See the study here