In 1942, Isaac Asimov, science-fiction author and professor, introduced the Three Laws of Robotics.
- The First Law states that a robot may not injure a human being or, through inaction, allow a human being to come to harm.
- The Second Law outlines that a robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- The Third Law states that a robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
- He later added a fourth law, the Zeroth Law: a robot may not harm humanity, or, by inaction, allow humanity to come to harm.
Yet how would these rules come into play with the ethical dilemmas posed by self-driving cars?
Automakers could hardwire into new self-driving cars whether or not to sacrifice the driver to save civilians. But the logic can run the other way too, putting the car's occupants' interests above everyone else's. So the question is: which school of thought should the car follow? Whose life is valued more? It's a trolley problem in disguise.
A utilitarian decision would always nudge automation toward the better societal outcome and save the most lives possible.
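As a toy sketch of that idea (not any manufacturer's actual logic), a purely utilitarian rule reduces to comparing expected casualty counts and picking the smallest. The function name and scenario labels here are hypothetical:

```python
# Hypothetical utilitarian tie-breaker: choose the action that minimizes
# expected deaths, with no regard for who the potential victims are.

def utilitarian_choice(outcomes):
    """outcomes: dict mapping an action name to its expected number of deaths."""
    return min(outcomes, key=outcomes.get)

# Toy scenario: swerving kills the single occupant; staying on course
# kills three pedestrians. The utilitarian rule sacrifices the occupant.
scenario = {"swerve_into_barrier": 1, "stay_on_course": 3}
print(utilitarian_choice(scenario))  # prints "swerve_into_barrier"
```

Notice what the sketch leaves out: the rule only counts lives, so it has nothing to say when the counts are equal, which is exactly where the harder questions begin.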
Yet the root of the problem comes to mind: human error causes thousands of traffic accidents a year. So if every car were automated, would there still be accidents at all?
The trolley problem also isn’t a perfect way to decide whether self-driving cars should exist. There are larger social questions at hand: “the value of a massive investment in autonomous cars rather than in public transport; how safe a driverless car should be before it is allowed to navigate the world (and what tools should be used to determine this); and the potential effects of autonomous vehicles on congestion, the environment or employment” (Pasquale, 2016). I think we need better public transportation before we consider better ‘private’ transportation.
One comment from the article brings up some good points. If the car has enough time to weigh the possible outcomes and run all its algorithms (however fast the technology is), then it should have enough time to avoid the crash altogether. And if a crash truly cannot be avoided, then no ethical decision can or should be made. We seem to be problematizing an unproblematic decision.
Going back to utilitarians, they would say to sacrifice the driver to save multiple lives. Yet I wonder what they would say if the choice were between sacrificing the driver and killing one pedestrian. How would the car’s algorithms come into play then? Would it know to kill the driver rather than a little girl, a murderer, or a grandmother? I don’t think the cars have that capability. So again, the trolley problem isn’t always the best framework.