BP#7 Self-Driving Cars, The Trolley Problem, and Rules of Robotics


In 1942, Isaac Asimov, sci-fi author and professor, introduced the Three Laws of Robotics.

  • The First Law states that a robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • The Second Law outlines that a robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • The Third Law states that a robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
  • He later added a fourth law, the Zeroth Law: a robot may not harm humanity, or, by inaction, allow humanity to come to harm.

Yet how would these rules come into play with the ethical dilemmas of self-driving cars?

There is a way to hardwire into new self-driving cars whether or not to sacrifice the driver to save civilians. But it could go the other way too, putting the car's occupants' interests above everyone else's. So the question is: which school of thought should the car follow? Whose life is valued more? It's a trolley problem in disguise.

A utilitarian decision would always nudge automation toward the better societal outcome and save the most lives possible.
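To make the two "schools of thought" concrete, here is a minimal sketch in Python. It is purely hypothetical and not from either article: the outcome names and casualty numbers are made up for illustration, and real cars do not score crashes this way. It just shows how a utilitarian rule (fewest total deaths) and an occupant-first rule would pick differently.

```python
# Hypothetical sketch: two crash-choice policies scoring the same set of outcomes.
# All names and numbers below are illustrative, not a real vehicle API.

from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    occupant_deaths: int
    pedestrian_deaths: int

def utilitarian_choice(outcomes):
    """Pick the outcome with the fewest total expected deaths."""
    return min(outcomes, key=lambda o: o.occupant_deaths + o.pedestrian_deaths)

def occupant_first_choice(outcomes):
    """Protect the car's own occupants first; break ties by total deaths."""
    return min(outcomes, key=lambda o: (o.occupant_deaths,
                                        o.occupant_deaths + o.pedestrian_deaths))

outcomes = [
    Outcome("swerve into barrier", occupant_deaths=1, pedestrian_deaths=0),
    Outcome("stay on course", occupant_deaths=0, pedestrian_deaths=3),
]

print(utilitarian_choice(outcomes).description)     # swerve into barrier
print(occupant_first_choice(outcomes).description)  # stay on course
```

The same situation produces opposite answers depending on which rule is hardwired in, which is exactly the dilemma.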

Yet this brings me to the root of the problem. Human error causes thousands of traffic accidents a year, so if every car on the road were autonomous, would there still be accidents at all?

The trolley problem also isn't a perfect way to decide whether self-driving cars should exist at all. There are bigger social questions at hand: "the value of a massive investment in autonomous cars rather than in public transport; how safe a driverless car should be before it is allowed to navigate the world (and what tools should be used to determine this); and the potential effects of autonomous vehicles on congestion, the environment or employment" (Pasquale, 2016). I think we need better public transportation before we consider better 'private' transportation.

One comment on the article brings up some good points. If the car has enough time to weigh the possible outcomes and run through all of its algorithms (regardless of how fast the technology is), then it should have enough time to avoid the crash altogether. And if a crash truly cannot be avoided, then no ethical decision can or should be made. We seem to be problematizing an unproblematic decision.

Going back to the utilitarians, they would say to sacrifice the driver to save multiple lives. Yet I wonder what they would say if the choice were between sacrificing the driver and killing one pedestrian. How would the car's algorithms come into play then? Would it know to kill the driver rather than a little girl, a murderer, or a grandmother? I don't think the cars have that capability. So again, the trolley problem isn't always the best framing.
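To illustrate why I doubt the cars have that capability, here is a hypothetical sketch (not a real perception API) of the kind of information a detection system actually produces. It can label and count objects, but it carries no identity or moral weight.

```python
# Hypothetical sketch of perception output: class labels and geometry only.
# Field names are illustrative assumptions, not any manufacturer's real API.

from dataclasses import dataclass

@dataclass
class DetectedObject:
    label: str         # e.g. "pedestrian", "vehicle", "cyclist"
    distance_m: float  # estimated distance from the car, in meters
    confidence: float  # how sure the detector is about the label

detections = [
    DetectedObject(label="pedestrian", distance_m=12.0, confidence=0.94),
    DetectedObject(label="pedestrian", distance_m=14.5, confidence=0.88),
]

# Nothing here says "little girl" or "murderer" or "grandma":
# the system can classify and count, not judge whose life is worth more.
for obj in detections:
    print(obj.label, obj.distance_m, obj.confidence)
```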

http://www.slate.com/articles/technology/future_tense/2016/10/self_driving_cars_shouldn_t_have_to_choose_who_to_protect_in_a_crash.html#lf_comment=594426495

https://blog.cjponyparts.com/2016/01/ethical-dilemma-self-driving-cars-robotics/
