Lethal Autonomous Weapons Systems


The artificial intelligence (AI) and robotics communities face an important ethical decision: whether to support or oppose the development of lethal autonomous weapons systems (LAWS).

Autonomous weapons systems locate, select, and engage targets without human intervention; they become lethal when those targets include humans. LAWS might include, for example, armed quadcopters that can search for and eliminate enemy combatants in a city; they do not include cruise missiles or remotely piloted drones, for which humans make all targeting decisions.

The UN has held four major meetings in Geneva under the auspices of the Convention on Certain Conventional Weapons (CCW) to discuss the possibility of a treaty banning autonomous weapons. There is at present broad agreement on the need for "meaningful human control" over the selection of targets and decisions to apply deadly force. Much work remains on refining the necessary definitions and identifying exactly what should or should not be included in any proposed treaty.

Meanwhile, technology moves on. For example, the DARPA CODE (Collaborative Operations in Denied Environment) program is described by its program manager as developing aerial vehicles that will operate "just as wolves hunt in coordinated packs with minimal communication".

