Abstract
States and scholars recognize legal reviews of weapons, means or methods of warfare as an essential tool to ensure the legality of military applications of artificial intelligence (AI). Yet, are existing practices fit for this task? This article identifies necessary adaptations to current practices. For AI-enabled systems used in relation to targeting, legal reviews need to assess the systems’ compliance with additional rules of international law, in particular targeting law under international humanitarian law (IHL). The article discusses the procedural ramifications thereof. It further finds that AI systems’ predictability problem needs to be addressed through the technical process of verification and validation, a process that generally precedes legal reviews. The article argues that ultimately, because the law needs to be translated into technical specifications understandable by the AI system, the technical and legal assessments conflate into one. While this has several consequences, the article suggests that emerging guidelines on the development and use of AI by states and industry can provide elements for the development of new guidance on the legal assessment of AI-driven systems. The article concludes that legal reviews become even more important for AI technology than for traditional weapons: as human reliance on AI increases, more attention must go to a system’s legality.