A research center is being established at Carnegie Mellon University, its primary focus the ethics of artificial intelligence. The K&L Gates Endowment for Ethics and Computational Technologies comes in response to growing concern about the potential impacts of artificially intelligent technologies.
The White House hosted a series of workshops around the country to discuss the possible impacts A.I. technology may have on the world, and in October released a report on those possible consequences.
In response, five technology firms – Amazon, Facebook, Google, IBM and Microsoft – banded together in a partnership to help establish ethical guidelines for the design and use of A.I. technology.
According to Subra Suresh, as A.I. technology advances, the topic of ethics must be raised – especially as the United States military studies autonomous weapons, which many critics oppose outright for their ability to make killing decisions on their own. As Suresh put it, “We are at a unique point in time where the technology is far ahead of society’s ability to restrain it” (Markoff, 2016).
In addition, current studies and trials aim to develop autonomous, or “self-driving,” vehicles. Artificial intelligence itself began in the 1950s, when several faculty members at Carnegie Mellon developed software showing how computer algorithms could intelligently solve problems.
Given the potential impact of A.I. technology on both culture and the economy, it is essential that society make thoughtful, ethical choices about how these “intelligent machines” are utilized.
As Peter J. Kalis, chairman of the K&L Gates law firm, states, “Carnegie Mellon resides at the intersection of many disciplines, […] It will take a synthesis of the best thinking of all of these disciplines for society to define the ethical constraints on the emerging A.I. technologies.”
It is refreshing to witness people taking the notion of intelligent tech seriously and, before it is even fully established or practiced, already weighing the ethical dilemmas some of this tech may – and likely will – present. How much freedom should autonomous cars or intelligent military drones be given? What will their impact be on humanity as a whole? These are recurrent topics that need in-depth discussion. Some warn of the dangers of creating intelligent tech that could quickly outthink and outsmart us – as Suresh put it, tech that far exceeds our ability, as humans, to “restrain it.”
With this in mind, I now turn to the perspective of Kantian ethics to further explore this issue of man and intelligent machine. What possible drawbacks may arise? What ethical considerations do these inventions present? What responsibilities do those in charge have regarding these machines and their overall effect on humanity?
Kant’s central principle in ethics is termed the “categorical imperative,” which he expressed through four formulations. As rational beings, we cannot simply “opt out” of this imperative, just as we cannot opt out of being rational beings. When attempting to work through the ethical problems and solutions surrounding artificial intelligence, Kant’s four formulations – universalizability, treating humanity as an end in itself, the formula of autonomy, and the “kingdom of ends” – are all valid stances to consider.
In the formula of universalizability, an act is permissible only if it remains morally sound when applied to all rational beings. If an action ceases to make sense when universalized, it fails this first test.
In “treating humanity as an end in itself,” Kant argues that rational beings may never be treated merely as means to an end; their own reasoned motives must be equally respected.
In the formula of autonomy, Kant claims that an agent must follow the categorical imperative not from any external influence, but out of rational will – which requires agents to take the rights of others into account when making decisions.
Lastly, the “kingdom of ends” insists upon acting on principles that a body of rational agents would accept as proper laws – rules that can be applied to all members of a society without treating any person merely as a means to an end.
When we apply these formulations to the issue at hand – developing ethical standards for introducing advanced, artificially intelligent technology into our society – one realizes what an “ethical” decision must satisfy. It must be an action that can be applied to all rational agents; it must respect those rational agents and not treat them merely as means to an end; it must be followed not because of any external force but out of one’s own rational will; and it must be acceptable as law among a community of rational beings. Matters quickly become precarious once the uncertainty of advanced technology comes into play. When creating a technology that makes decisions of its own and learns and adapts to new situations, it is nearly impossible to declare whether implementing these machines into our society and everyday world is good or bad. One could safely say it is problematic at the least – while we may hold to the categorical imperative, what rules will these machines abide by? What formulations would they consider, and do they have any commitment to the same obligations of “rational beings” that we do?
Markoff, John. “New Research Center to Explore Ethics of Artificial Intelligence.” The New York Times, 1 Nov. 2016. Web. 15 Nov. 2016.