One of the most intriguing and important discussions in international law concerns the potential impact of emerging technologies on the law of armed conflict (LOAC), including weapons that incorporate machine learning and/or artificial intelligence. Because a likely characteristic of these advanced weapons is the ability to make life-and-death decisions on the battlefield, these discussions have highlighted a fundamental question concerning the LOAC: Does the law regulating armed conflict require human input in selecting and engaging targets, or can that decision be made without it? This article analyzes views expressed by scholars and NGOs but focuses on views expressed by States, many of which have been publicized as part of the discussions of States Parties to the Convention on Certain Conventional Weapons. These differing State views make clear that States have not yet reached consensus on whether LOAC compliance legally requires human decision making. Given that lack of consensus, one can only conclude that the law does not currently require a human decision for selecting and engaging targets. Though the international community may eventually adopt such a requirement, it has not yet done so. Accordingly, States should continue to research and develop weapons that incorporate machine learning and artificial intelligence, because such weapons offer the promise not only of greater LOAC compliance but also of increased opportunities to provide new and creative protections.