Abstract
This article argues that the unpredictability and unintuitive behavior of modern artificial intelligence give users of autonomous weapon systems (AWS) greater opportunity to remain ignorant of the risks their systems pose to protected entities on the battlefield, and that this ignorance can be maintained even in iterative situations featuring a prior civilian casualty event. It demonstrates this claim through illustrative targeting scenarios, before formalizing the argument in a model tracing the evolution of an AWS-user’s mens rea as they receive notice of a prior incident and choose whether or not to pursue an inquiry. This analysis reveals a perverse incentive structure, in which AWS-users are encouraged, in order to evade criminal liability, not to properly apply their precautionary and constant care obligations to monitor their weapons and conduct post-battle analyses. The article concludes with recommendations on how this perverse incentive structure can be corrected.