By Jeff Sanford
Birmingham, Alabama — June 26, 2015 — Professional bioethicists are increasingly discussing a disturbing philosophical question around self-driving cars. If a self-driving car faces two choices—say, hit a school bus full of small children or drive you and your car off a cliff to save the children—what should the car be programmed to do?
Should your self-driving car be programmed to kill you?
This is, indeed, a strange and disturbing brave new world we inhabit. But these are exactly the questions bioethicists have been wrestling with of late.
A story recently posted to the website of the University of Alabama at Birmingham quoted some of the school’s ethicists on this question, which could become a practical one sooner rather than later.
Google’s self-driving cars have now driven more than 1.7 million miles. Volvo claims it will offer a self-driving vehicle by 2017. Some high-end cars already come with automatic braking systems. So the debate is not as theoretical as one might first think. The computers doing the driving in the Google car are making millions of decisions each second. There could be a situation where, according to one online pundit, a self-driving car would suddenly be weighing two choices: “swerving into oncoming traffic or steering directly into a retaining wall.”
This is not a choice anyone wants to make. What should the computer be programmed to do?
Members of UAB’s Ethics and Bioethics teams (which took part in last year’s Bioethics Bowl) have spent “a great deal of time wrestling with these types of questions,” according to the story. The bioethics profs suggest the answer could be, “yes … your car should be programmed to kill you.”