Abstract
This article is the third installment of a three-part series on AI-enabled weapons and human control. Artificial intelligence (AI) is shaping debates about military technology by challenging the role of human decision-making in the use of autonomous weapon systems (AWS). This article argues that effective governance of AI-enabled AWS requires moving beyond narrow conceptions of “meaningful human control” and instead recognizing a network of embedded human judgment throughout the weapon system’s lifecycle. The article focuses on the operator stage, examining the unique role operators play in guiding, observing, and terminating deployed AWS. Drawing on policy debates, doctrinal frameworks, and empirical examples, it highlights the cognitive and practical limits operators face, including the speed of machine decision-making, the risk of automation bias, and the challenge of maintaining vigilance during long operations. Still, it shows that operators remain vital to embedding human judgment by authorizing updates, supervising system learning, and ensuring alignment with command intent. By placing operators within a broader network of actors, one that also includes software designers and commanders, the series illustrates how human judgment can be integrated at multiple decision points. This lifecycle perspective emphasizes that military AI depends on distributed human involvement, providing a more realistic foundation for lawful and effective AWS governance.