Specialists are looking at a new kind of warfare: an intelligent drone armed with advanced A.I., designed to recognize enemy threats with unique and possibly even unnerving prowess.
This drone is part of the Pentagon’s plan for “autonomous and semiautonomous weapons,” and it raises alarms among scientists who worry that this invention, and the radical change in warfare and military power it represents, will begin a “robot arms race” (Rosenberg and Markoff, “The Pentagon’s ‘Terminator Conundrum’”).
Deputy Defense Secretary Robert O. Work explains: “What we want to do is just make sure that we would be able to win as quickly as we have been able to do in the past” (Rosenberg and Markoff, “The Pentagon’s ‘Terminator Conundrum’”). In addition, these weapons would offer speed and precision unmatched by human soldiers and pilots. The Pentagon’s shift is toward what officials term “centaur warfare,” named after the Greek mythological creature. The strategy “emphasizes human control and autonomous weapons as ways to augment and magnify the creativity and problem-solving skills of soldiers, pilots and sailors, not replace them” (Rosenberg and Markoff, “The Pentagon’s ‘Terminator Conundrum’”).
The Pentagon worries that other world powers and enemies will create something “like the ‘Terminator’”: a machine, independent of its creators, that can decide when and what to kill. The conundrum at this moment is twofold: precisely how much independence to give such a machine as the autonomous drone, and how to create one before other militaries do in order to establish dominance as a military power; hence the concern over a “robot arms race.”
However, from the viewpoint of Utilitarianism, the issue at hand becomes even more problematic and complex. While Utilitarianism seems fairly straightforward and impartial, its simple central idea raises further concerns about issues such as advancing warfare and military tactics. According to Utilitarianism, “the welfare of each person is equally morally valuable” (Shafer-Landau, The Fundamentals of Ethics). We can only develop a truly moral outlook when we broaden the scope of our concern to focus on the whole, rather than just ourselves or matters directly concerning us. If Utilitarianism, with its single ultimate rule of “maximize well-being,” is applied here, how precisely does it relate to military ideals and tactics?

Because Utilitarianism judges actions and intentions separately, one may examine this topic by judging both. Actions are right provided they are optimific. Intentions are morally good provided they are reasonably expected to yield good results. For example, the intention of the Pentagon’s military plans is, ideally speaking, to assert dominance over other military powers in order to protect its own nation’s security and place in an ever-unstable environment. To protect its people and country: these are intentions that Utilitarians would praise as good. However, the action itself would cause great harm to others, as war often does. While the intentions of security and protection are admirable, the action would ultimately cause great suffering and thus cannot be deemed right; rather, Utilitarians would likely condemn such practices because they cause misery. Consequentialists insist that one’s moral duty is to make the world the best it can be, and that one must judge this based on actual rather than expected results. In this case, building an effective weapon that will likely cause harm to other people cannot be deemed good.
In addition, this situation presents an example of a “slippery slope argument.” This argument states that allowing a certain action will lead to awful results in the bigger picture: that there will be serious, avoidable harm as a result of some new policy or practice. It also argues that we are required to choose the option that maximizes happiness and reduces misery and suffering; anything else would be “both short-sighted and immoral” (Shafer-Landau, The Fundamentals of Ethics). Applied to the concern the Pentagon currently has over this new form of warfare, where does one stop? How much independence does one give to a weapon whose abilities greatly outmatch any human’s? The ‘Terminator Conundrum’ essentially warns of a machine that greatly overpowers its creators and causes great harm to enemy and ally alike; in what situation, all intentions set aside, would this produce good rather than great suffering?
Rosenberg, Matthew, and John Markoff. “The Pentagon’s ‘Terminator Conundrum’: Robots That Could Kill on Their Own.” Nytimes.com. The New York Times, 25 Oct. 2016. Web. 25 Oct. 2016.
Shafer-Landau, Russ. The Fundamentals of Ethics. New York: Oxford UP, 2010. Print.