The Pentagon has an idea for robots that can kill on their own. A small drone with "six whirring rotors" scans for targets with an attached camera. The drone operates autonomously, without human control. The drone is "armed with advanced artificial intelligence software, it had been transformed into a robot that could find and identify the half-dozen men carrying replicas of AK-47s around the village and pretending to be insurgents" (Rosenberg, Markoff). The weapon would go almost unnoticed, and it is a core strategy for keeping the United States at the top of military power.
Although no human attitudes are directly involved, this action can still be related to Kant, because the robot was still created by a human. If the robot kills the bad people it spots through its camera, it can be seen as fulfilling its duty. If the duty is to kill, this robot may be a solution. Even if it is not the best approach, on a strict duty-based reading the action is good as long as the duty is fulfilled. In a war setting, if the robot's users are inclined toward winning the war and killing more enemies, this robot serves them well.
Kant's categorical imperative includes the universal law formulation, which tests whether a particular action is a duty and whether it is rational. If these killing drones fulfill a duty, then Kant might approve. The debate is whether the action is rational: is it consistent? If the killing is not accurate, the drones fail and waste money, or worse, kill an innocent person, which would be the opposite of an action done for moral reasons.
Another aspect of Kant's ethics is the humanity formulation, which states that we must never treat humans merely as means. This idea does not fully agree with the invention either. Killing contradicts the formulation because the targets are not treated with dignity or worth, even though humans are valuable as ends in themselves.