BP #8

Link to article: http://www.universityaffairs.ca/features/feature-article/i-robot-need-ethical-guidance/

Today, more and more things are becoming automated to fit the needs of our progressing society. Incorporating more automated systems means building robots to do jobs that were normally done by humans. Such a trade-off comes with many pros and cons, both practical and ethical. A pro of having a robot do the job is that it works faster, with more consistent quality and fewer mistakes. On the other hand, a con for the same situation is that it can be more expensive to have a machine do the work. Another con is the possibility that the job the robot is doing could endanger more human lives than if an actual human were doing it. What if there was a power outage, the machine lost power, and it killed a couple of workers? What if the same thing occurred but with an experienced machine operator behind the controls? Arguably, the operator would have been able to engage a manual emergency brake, or at the very least would have been able to warn the other workers to get out of harm's way.

Discussions of robot ethics almost always start from Isaac Asimov's famous Three Laws of Robotics, which are often cited as a baseline for how robotic systems should behave. The three laws are: 1) A robot may not injure a human being or, through inaction, allow a human being to come to harm; 2) A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law; and 3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
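
Because the three laws form a strict priority ordering (the First Law outranks the Second, which outranks the Third), they can be sketched as a simple lexicographic comparison. The Python snippet below is a hypothetical illustration only; the action names and flags are invented, not taken from the article.

```python
# Hypothetical sketch: the Three Laws as a strict (lexicographic)
# priority ordering. All names and flags here are invented.
from typing import List, NamedTuple

class Action(NamedTuple):
    name: str
    injures_human: bool      # direct First Law violation
    allows_human_harm: bool  # First Law violation through inaction
    disobeys_order: bool     # Second Law violation
    destroys_self: bool      # Third Law violation

def law_violations(a: Action) -> tuple:
    """Score an action; lower tuples are better, earlier slots dominate."""
    first_law = a.injures_human or a.allows_human_harm
    return (first_law, a.disobeys_order, a.destroys_self)

def choose(actions: List[Action]) -> Action:
    """Pick the action with the least severe violations under the ordering."""
    return min(actions, key=law_violations)

# Sacrificing itself and disobeying an order still beats letting a
# human come to harm, because the First Law dominates the other two.
options = [
    Action("stand by", injures_human=False, allows_human_harm=True,
           disobeys_order=False, destroys_self=False),
    Action("shield the worker", injures_human=False, allows_human_harm=False,
           disobeys_order=True, destroys_self=True),
]
print(choose(options).name)  # -> shield the worker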

Stating the three laws of robotics brings up the topic of automated cars driving on the road and the potential complications that could arise. Many engineers designing and programming the robotic systems behind automated cars say they are running into more and more ethical problems. Such problems are brought to light by the ideas of utilitarianism. Utilitarianism is an ethical theory that states that the best action is the one that maximizes utility. "Utility" is defined in various ways, usually in terms of the well-being of sentient entities, such as human beings and other animals. Having said that, the design engineers run into many problems involving human utility when deciding how the artificial intelligence will react in situations of distress.

A hypothetical situation that worries many engineers, and that helped them realize they need more than the three laws, is this: what if an automated car that abides by the three laws of robotics is driving its passengers and suddenly has to slam on the brakes to keep them from being injured in a collision, but the consequence is a massive pile-up that harms more individuals; which option is most favorable? What if the automated car had to swerve around another car to avoid a collision but risked hitting a child playing close by; which choice is better in the end for each party? How were the ethics behind that decision decided upon? Hypotheticals such as these have shed light on how the ethics of robotics needs to progress at the same pace as the technology itself.
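
To make the utilitarian calculus concrete, here is a minimal Python sketch of the rule described above: score each available maneuver by its expected harm and pick the minimum. Every maneuver, probability, and casualty count below is invented for illustration; none of it comes from the article or from any real vehicle's software.

```python
# A hedged sketch of the utilitarian rule: among the car's available
# maneuvers, pick the one that minimizes expected harm to everyone
# affected. All numbers here are made up for illustration.

maneuvers = {
    # maneuver: list of (probability, people harmed) outcomes
    "brake hard":  [(0.7, 0), (0.3, 4)],   # may trigger a pile-up behind
    "swerve left": [(0.9, 0), (0.1, 1)],   # may strike the child nearby
    "do nothing":  [(1.0, 2)],             # certain collision ahead
}

def expected_harm(outcomes):
    """Expected number of people harmed for one maneuver."""
    return sum(p * harmed for p, harmed in outcomes)

# The utilitarian choice: minimize expected harm across everyone affected.
best = min(maneuvers, key=lambda m: expected_harm(maneuvers[m]))
print(best, expected_harm(maneuvers[best]))  # -> swerve left 0.1
```

Even in this toy version, the discomfort the engineers describe is visible: the arithmetic is trivial, but the numbers themselves, how likely the pile-up is and how the child is weighed against the passengers, are the actual ethical decisions, and someone has to program them in advance.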
