BP #12 – Artificial Intelligence from the Perspective of Utilitarianism


Last week, I took a look at the research center being developed at Carnegie Mellon University, whose primary focus is the ethics of artificial intelligence. Growing concern over the ethics of artificially intelligent technologies has led to the formation of the K&L Gates Endowment for Ethics and Computational Technologies. As this is now the primary focus of my final group project, I decided to return to the article for another discussion.

A series of workshops has discussed the potential impacts artificial intelligence may have on the world, with some of the best and worst outcomes being brought up and analyzed in depth.

However, while last week I turned to the ethical perspective of the German philosopher Immanuel Kant and his categorical imperative, this week I turn to Utilitarianism, specifically John Stuart Mill's discussions, for an alternative viewpoint on A.I. technologies.

Whether it be autonomous vehicles or autonomous drones, in both military and commercial use, artificially intelligent technology merits thorough discussion and consideration. The question of how much freedom an intelligent drone or autonomous car should have has already been raised. Another concern involves limiting such technologies: 'blacklisting' certain commands and restricting specific functions would ultimately defeat the purpose of creating an intelligent machine or artificial 'being'.

Something such as autonomous cars seems fairly simple – a technology that could reduce accidents and traffic-related incidents caused by human error and distracted driving. It is worth noting that traffic-related deaths have been on the rise in recent years, with many people placing the primary blame on texting, smartphones, and apps in general. An intelligent car could potentially reduce incidents involving these factors. The argument becomes far more troublesome, however, when one looks at intelligent drones built specifically for military use – more specifically, at giving a machine the power and responsibility to take life rather than improve it.

I previously discussed this precise problem in my prior blog post, BP #8 – 'The Terminator Conundrum'. Essentially, many have criticized building what they consider a threat to humanity. An intelligent drone lacks the humanity and morals of most humans; to build a machine that is not only intelligent and autonomous but also designed specifically to kill is troublesome, to say the least, raising an issue that has been compared to The Terminator – hence the name.

Turning now to Utilitarianism, I will attempt to further discuss the possible problems artificial intelligence may pose to humanity. Utilitarianism is an ethical viewpoint which holds that the best action is the one that maximizes 'utility'. The founder of Utilitarianism, Jeremy Bentham, described utility as 'the sum of all pleasure that results from an action, minus the suffering of anyone involved in said action'. According to this view, the only standard of right versus wrong is the consequences that arise from an action. With this in mind, the idea of autonomous cars becomes a bit more complex, and the idea of killer robots within the military becomes almost impossible to justify. If all sentient beings deserve equal ethical consideration, then building an intelligence specifically as a weapon is unacceptable under this argument. One counterargument holds that protecting one's country and people is an acceptable reason for building such a weapon. However, can one argue that war and violence serve the greater good if others die because of it?
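To make Bentham's definition a bit more concrete, here is a toy sketch of my own (not anything from the article, and the actions and numbers are entirely made up) of how a strictly utilitarian machine might rank its options: each outcome is scored as total pleasure minus total suffering, and whichever action yields the highest net utility wins.

# Toy utilitarian decision rule: utility = sum of pleasures - sum of sufferings.
# The action names and scores below are hypothetical, purely for illustration.

def net_utility(pleasures, sufferings):
    return sum(pleasures) - sum(sufferings)

def best_action(actions):
    # Pick the action whose consequences maximize net utility.
    return max(actions, key=lambda name: net_utility(*actions[name]))

actions = {
    "swerve":   ([5], [2, 2]),   # (pleasures, sufferings) per affected party
    "brake":    ([3], [1]),
    "continue": ([4], [6]),
}

print(best_action(actions))  # -> "brake"

Notice what the sketch leaves out: nothing about the action itself, or who is affected, carries any weight. Only the consequences decide, which is precisely why the view treats a weaponized intelligence so harshly.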

Autonomous cars, on the other hand, warrant further conversation: how precise the technology would be, along with the costs of production, distribution, availability, and accessibility, would all need to be addressed. In addition, is it ethically responsible to completely remove the driver, taking away their ability to control the vehicle and depending solely on a computer that is possibly, even likely, more intelligent than the human within it? The idea of artificial intelligence seems fascinating, even awe-inspiring. But once one leaves the sci-fi realm in which the idea originated, the real world presents a far more problematic vision. When everyone merits moral consideration, the argument for this intelligent technology cannot be one-sided. Several factors must be considered, topics must be discussed in depth, and questions must be asked, often again and again. Do artificially intelligent machines cause greater happiness and greater good, or rather suffering and great misery? Where does one draw the line, and how can we determine whether their presence leaves us better or worse off than we find ourselves now?

 

Works Cited

Markoff, John. "New Research Center to Explore Ethics of Artificial Intelligence." The New York Times, 1 Nov. 2016. Web. 15 Nov. 2016.

 
