Killer Robots on the Horizon for Weapons Technology

With advances in technology and artificial intelligence, fully autonomous weapons are moving closer to reality. Lethal Autonomous Weapons Systems (LAWS), more commonly known as “killer robots,” differ from the drones the military uses today in that they would be capable of selecting and engaging targets without human intervention. Concern over the potential ramifications of LAWS has sparked an international discussion with strong moral undertones; to human rights groups, the prospect of giving a robot the “choice” and power to kill seems fundamentally wrong. The effect LAWS could have on military operations would be profound: with autonomous robots in play, there is a real possibility of completely removed and emotionless combat, and the fear is that these killer robots will make the ultimate decision about who lives and who dies. The use of LAWS will undoubtedly affect international relations and pose a serious challenge for international law.

There is debate over whether killer robots will even be permitted in warfare, given possible conflicts with the international obligations set forth in the UN Charter and the Geneva Conventions. LAWS are not yet being used in the field because they are not fully operational, but the United Nations is seeking to anticipate the potential issues and address them before the situation spirals and a race to the bottom ensues. The United Nations met in May to consider the potential social and legal implications of killer robots, and one major legal issue is liability.

There is ambiguity about who will be held liable if a robot “commits” a war crime or human rights violation. Should the manufacturer be held liable? The military commander? The programmer? The robot itself? Would an autonomous robot even qualify for personhood in a liability context? International humanitarian and human rights law demands that responsibility be assignable if research continues and LAWS come to fruition. The question is complicated: while the seemingly obvious answer is the military commander (absent some sort of product defect, in which case the manufacturer or programmer would be liable), the commander does not have complete control over the robot. LAWS would theoretically identify and engage targets based on an algorithm written by a programmer. So if the robot screws up and kills a civilian, is it the programmer’s fault for a glitch in the code, or the commander’s fault for failing to monitor the robot’s activities closely enough to catch the mistake?

A total of 117 states are parties to the Convention on Certain Conventional Weapons, which was created to restrict the use of certain types of weapons that may affect civilians indiscriminately, an umbrella under which killer robots certainly fit. Adding LAWS to the weapons covered by the Convention may prove critical in preventing a race to the bottom among the countries with the technological capability to produce killer robots. The outcome of the UN’s Geneva discussions will be reviewed at the formal conference of the Convention on Certain Conventional Weapons later this month, where states will discuss possible next steps on autonomous weapons.
