BP #8: The Pentagon’s ‘Terminator Conundrum’: Robots That Could Kill on Their Own

The Pentagon has an idea for robots that can kill on their own. A small drone with "six whirring rotors" scans for targets with an attached camera, operating autonomously rather than under human control. According to the article, "armed with advanced artificial intelligence software, it had been transformed into a robot that could find and identify the half-dozen men carrying replicas of AK-47s around the village and pretending to be insurgents" (Rosenberg and Markoff). The weapon would go almost unnoticed, and this kind of autonomy is central to the strategy for keeping the United States at the top of military power.

Although no human attitudes are directly involved, this action can still be related to Kant, since the invention was still created by humans. If the robot kills enemies that it spots through its camera, it could be seen as fulfilling its duty. If the duty is to kill, this robot may be a solution. Even though it might not be the best approach, as long as the duty is fulfilled, the action is good. In a war setting, if the users' inclinations include winning a war and killing more people, this robot is a good solution for them.

Kant's categorical imperative includes the universal law formulation, which tests whether a particular action is a duty, or rather, whether a particular action is rational. If these killing drones fulfill a duty, then Kant would approve. The debate is whether the action is rational: is it consistent? If the killing isn't accurate, the drones fail; they waste money or, even worse, kill an innocent person, which would be the opposite of an action done for moral reasons.

Another aspect of Kant's ethics is the humanity formulation, which states that we should never treat humans merely as means. This is another idea that does not fully agree with this invention. Killing contradicts the formulation because the people killed are not treated with dignity or worth, even though humans are valuable as ends in themselves.

http://www.nytimes.com/2016/10/26/us/pentagon-artificial-intelligence-terminator.html

2 thoughts on "BP #8: The Pentagon's 'Terminator Conundrum': Robots That Could Kill on Their Own"

  1. The idea that the Pentagon is considering artificially intelligent robots that can kill is quite interesting. I agree that parts of the idea go hand in hand with Kant's beliefs, but in other ways, such as not treating the individuals killed with dignity, they do not match up with Kant. Very nice ethical analysis of these potential robots. Good read!


    1. The idea of Kant wanting to treat people with dignity and worth comes from the humanity formulation. I think killing people, especially if it were by accident, would be an example of failing that particular formulation.
