The Enormous Dangers of Military Artificial Intelligence Reveal the Need for International Regulation 

Artificial intelligence (AI) is bound to be a major technological force reshaping the 21st century. But its reverberating effects will not be confined to the technological sphere; the evolution of AI will greatly influence other domains, particularly the military and international relations. Moreover, the expanding use of AI in the military sphere and the growing capability of AI systems, given their increasing ability to address ever more complex tasks and the possible emergence of artificial “general intelligence”, will accelerate the timing and intensity of these impacts.

Militaries worldwide, especially among the major powers, have been integrating AI into their military strategies, conventional weapons, and even their nuclear command structures. One example of the military use of AI occurred in March 2020 when, according to a UN report, a “lethal autonomous weapons system” was deployed in Libya. The greatest threats posed by the military use of AI lie in the development of autonomous weapons systems that do not require human oversight and in the increased use of automated battlefield decision-making systems, which are vulnerable to manipulation by adversaries. Automated decision-making is particularly risky in the context of nuclear weapons.

Furthermore, the military use of AI may degrade international peace and stability given the risks of failures in AI software, unintentional conflict driven by uncertainty about how adversaries’ AI systems will be used, and inadvertent escalation stemming from the inflexibility of AI systems and human overreliance on them. In sum, the increased deployment of autonomous weapons systems and automated battlefield decision-making systems could heighten the risk of great-power conflict and greatly undermine strategic stability, potentially even driving a country to launch nuclear weapons.

Given what is at stake and the global nature of this technology, it is vital that the international community unite to establish global norms, regulations, and, when necessary, institutions for the safe and responsible development and use of AI in military contexts. Various state actors and think tanks support stronger international cooperation to address the military use of AI, including the US government, which recently released a “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy” calling for international humanitarian law to guide the development and deployment of AI in the military. Proactively shaping the development of military AI, rather than reacting haphazardly to its effects, will allow humankind to leverage the benefits of AI while minimizing the threats it poses to a peaceful society.

So, how can we leverage global governance to effectively address the dangers of the military use of AI? Michael Klare, a Senior Fellow at the Arms Control Association (ACA), advocates a framework that begins with non-binding Track 2 diplomacy (among scientists, arms control experts, and others) and unilateral or bilateral initiatives, then advances toward “strategic stability” talks and formal, binding treaties. The “strategic stability” talks would resemble the current US-Russia Strategic Stability Dialogue but would also include powers like China. This framework also stresses the importance of confidence-building measures (CBMs) to enhance trust between the relevant parties. While the US could take the lead in proposing CBMs, such as a “Dialogue on AI Safety and Strategic Stability” and standard-setting for the military use of AI, it is critical that all countries participate equally in this process. CBMs alone are not sufficient, however: many civil society groups, including the ACA, stress the need for binding, enforceable international agreements that regulate the use of AI in military contexts.

So, what about international treaties to govern the military uses of AI? The ACA and countries like Spain and Mexico support regulating lethal autonomous weapons systems through the Convention on Certain Conventional Weapons framework treaty, a position that UN Secretary-General António Guterres supports. Another idea, supported by WFM/IGP, is to negotiate a UN Framework Convention on AI (similar to the UNFCCC) that would drive further international negotiations on the creation and implementation of ethical AI principles in military and other contexts. Furthermore, The Millennium Project supports a UN Treaty on Artificial General Intelligence, which would help set the “initial conditions” for artificial general intelligence, including in the military domain.

And what about international institutions? Proposals have been advanced for establishing an “International Artificial Intelligence Agency” that would function much like the International Atomic Energy Agency (IAEA). However, effectively monitoring AI technology could prove more difficult than monitoring nuclear technology, and adequate enforcement remains a persistent challenge. Additionally, an Intergovernmental Panel for Artificial Intelligence (IPAI), supported by government officials such as France’s Emmanuel Macron, could complement a UN Framework Convention on AI and enhance inclusivity in AI global governance.

Clearly, the world public should play a role in shaping the future of AI, especially in military contexts. The UN’s Global Digital Compact is a good start. However, given the fast-paced evolution of AI and other emerging technologies, it is critical to expand the opportunities for individual and civil society input into AI global governance; a single dialogue is not enough. One idea for facilitating worldwide public participation is to establish an International Science & Technology Organization, as proposed by The Millennium Project. Such an organization would be an “online collective intelligence platform” that could host a continuous dialogue among members of the global community on science and technology, including new military uses of AI.

To create the conditions for lasting peace and stability in a world increasingly shaped by emerging technologies, it is crucial to develop and deploy AI technologies purposefully and in consultation with all of humanity. And developing these technologies “the right way”, in accordance with our values, may require slowing down and prioritizing how we deploy these technologies rather than how quickly we do so. One need only look at the recent US Supreme Court case challenging the immunity that internet and social media companies enjoy to see what happens when powerful technologies outpace the rules meant to govern them. Is it not preferable to establish a strong, just foundation for these technologies from the outset? Or would we rather suffer the unintended consequences of an unrestrained global obsession with military dominance?

Jacopo DeMarinis is a graduate of the University of Illinois at Urbana-Champaign and is currently the Social Media and Communications Coordinator at Citizens for Global Solutions, a grassroots organization that promotes world government. He is pursuing a career in peacebuilding and conflict resolution and plans to obtain a Master’s in Conflict Transformation and Social Justice from Queen’s University Belfast. Ultimately, he aspires to work for the United Nations’ Department of Political and Peacebuilding Affairs. He can be reached at [email protected].
