Abstract
Much of the debate surrounding the military use of artificial intelligence (AI) tends to focus on lethal autonomous weapons systems. These are systems that, once activated, can select and engage targets without further human intervention, and they are sometimes pejoratively called "killer robots." Moreover, debates often focus on their use and risks in land warfare. This land-warfare focus tends to invoke questions about the systems' ability to distinguish between combatants and civilians on urban battlefields and the potential for mistakes. Legal debates about the lawfulness of AI and lethal autonomous weapons systems in warfare similarly tend to focus on land warfare and thus on the law of armed conflict as it applies specifically to that setting.
Centering legal debates primarily on lethal autonomous weapons systems in land warfare, however, risks overlooking important nuances in naval warfare and its governing law, the law of naval warfare. Naval warfare and the law of naval warfare differ in substantial ways from their land analogs. These differences may make naval warfare and the law of naval warfare more accommodating of AI and autonomous systems and mitigate some of the risks and concerns arising in debates about their use in land warfare. This article explores some of those nuances and differences in the law of naval warfare and how they can be conducive to AI and autonomous systems.