International Law Studies

Abstract

Israel’s military campaign in Gaza, as well as ongoing conflicts in Ukraine, Yemen, Iraq, and Syria, reportedly involves the military use of AI-enabled decision-support systems (AI-DSS) within the joint targeting cycle (JTC). These tools use AI techniques to collect and analyze data, provide information about the operational environment, and make actionable recommendations, with the aim of aiding military decision-makers in evaluating factors relevant to legal compliance, such as taking precautions and ensuring proportionality in attacks. These systems are often touted as mere aids to human decision-making and, as such, have flown largely under the regulatory radar because they are perceived to fall short of fully autonomous systems. We challenge this narrative. In this article, we evaluate the effects of AI-DSS reportedly used within the JTC by elucidating several factors that, especially when considered in combination, warrant increased attention from a legal perspective. These include the effects of the speed and scale of AI-enabled targeting on human judgment, the inherent error risks of the systems (including accuracy issues), and cognitive biases that could lead to, or even engender, violations of international humanitarian law (IHL), focusing specifically on the principle of precautions in attack. Within the broader debate on AI, law, and fundamental societal values and principles, our contribution critically engages with claims that the speed and scale AI offers on the battlefield come without existential risks, speaking to what kinds of wars we will be fighting, what it means to fight war augmented by AI-enabled systems, and, ultimately, what it means to be human(e) in war in the age of AI.
