The Political Philosophy of REAIM: Rethinking Autonomous Weapon Systems from a Just War Theory Perspective
By Kyungho Song
Research Fellow, Yonsei University
December 5, 2024
  • #Global Issues
  • #Security & Defense
  • #Technology & Cybersecurity

► As autonomous weapon systems (AWS) based on AI are increasingly integrated into military operations worldwide, the need for international cooperation to establish ethical norms and legal frameworks governing their use has become more pressing.

► A core principle of Just War Theory (JWT) is that military actions must be justifiable, and in this context, Explainable AI (XAI) plays a crucial role in the ethical deployment of AWS by clarifying decision-making processes and ensuring accountability.

► Despite AI advancements, AWS fundamentally lacks a moral compass, creating a responsibility gap that can only be bridged through human oversight and command responsibility, ensuring ethical accountability in high-stakes military contexts.


As militaries worldwide adopt AI to boost operational efficiency, the potential benefits are considerable: real-time data processing, improved precision in targeting, better resource allocation, faster decision-making, and a significant reduction in human risk—all of which contribute to more effective and adaptive military operations. However, these advancements also bring about fundamental ethical concerns and inherent risks, particularly with the increasing automation of warfare. At the core of these concerns is the need to reconcile AI's operational efficiency with the moral responsibilities and legal obligations governing war conduct.


As autonomous weapon systems (AWS) based on AI are increasingly integrated into military operations worldwide, the need for international cooperation to establish ethical norms and legal frameworks governing their use has become more pressing. By creating global frameworks, nations can collaborate to prevent the misuse of AWS, ensuring that these technologies promote peace and security rather than escalate conflicts. In this context, the Responsible AI in the Military Domain (REAIM) Summit has emerged as a pivotal platform for addressing the ethical, legal, and operational challenges of incorporating AI into military systems.


Building on the above, this essay examines how the principles of JWT can be applied to AWS while also exploring the broader implications for REAIM. Specifically, the essay will analyze how AWS can be integrated into military operations to minimize civilian harm, maintain accountability, and ensure that AI deployment adheres to the ethical standards outlined in JWT. This framework underscores REAIM’s mission to promote the responsible use of AI in military contexts by highlighting key issues such as command responsibility, explainable AI (XAI), and the development of military leaders capable of ensuring AWS's ethical and responsible use.


1. Overview of Just War Theory


JWT is a philosophical framework that provides ethical guidelines for evaluating the justification and conduct of war. Rooted in the belief that war, despite its inherent destructiveness, can sometimes be morally justified under certain conditions, JWT is typically divided into three main principles: jus ad bellum, jus in bello, and jus post bellum.


Jus ad bellum outlines the conditions under which going to war is justified, including a just cause, legitimate authority, right intention, probability of success, proportionality, and last resort. Jus in bello governs conduct during war, focusing on the ethical treatment of combatants and non-combatants. Its fundamental principles include discrimination, which requires combatants to distinguish between military targets and civilians, and proportionality, which requires that the use of force be proportionate to the military objective, avoiding excessive harm to civilians. Jus post bellum addresses justice after conflict, emphasizing fair treatment for both the defeated and the victors, proportional reparations, and the restoration of political order and civil rights.


The challenge is to ensure that AWS, as a tool of modern warfare, aligns with these foundational ethical guidelines and does not undermine the principles of JWT. A significant portion of the moral issues surrounding AWS pertains to jus in bello, which ensures that warfare is conducted in a manner that respects ethical boundaries, particularly in how force is used and how non-combatants are treated.


2. Importance of Explainable AI (XAI) in AWS


A core principle of JWT is that military actions must be justifiable. In this context, XAI plays a crucial role in the ethical deployment of AWS. XAI refers to AI systems that can clearly explain their decision-making process to human operators, allowing them to verify that decisions are consistent with ethical standards and international law. If an AWS mistakenly targets a civilian, XAI can clarify why that decision was made, ensuring accountability and facilitating the learning process to improve future decisions. This transparency reinforces the moral authority of human commanders and ensures that military actions align with ethical principles and international law.
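
As a minimal illustration of what such explainability could look like in software, the hypothetical Python sketch below pairs each classification with a machine-generated rationale that a human operator can audit; the class, feature names, and contribution weights are invented for this example and do not describe any fielded system.

```python
# Hypothetical sketch: an AWS decision that carries its own auditable
# rationale. All names and weights here are illustrative assumptions.

from dataclasses import dataclass, field


@dataclass
class ExplainedDecision:
    label: str                 # e.g. "military" or "civilian"
    confidence: float          # model's probability for the chosen label
    evidence: dict[str, float] = field(default_factory=dict)  # feature -> contribution

    def rationale(self) -> str:
        """Render the top contributing factors as a reviewable statement."""
        top = sorted(self.evidence.items(), key=lambda kv: -abs(kv[1]))[:3]
        factors = ", ".join(f"{name} ({weight:+.2f})" for name, weight in top)
        return (f"Classified as '{self.label}' with confidence "
                f"{self.confidence:.2f}; dominant factors: {factors}")


decision = ExplainedDecision(
    label="military",
    confidence=0.91,
    evidence={
        "vehicle_signature": 0.45,
        "uniform_detected": 0.30,
        "proximity_to_protected_site": -0.20,
    },
)
print(decision.rationale())
```

In a production XAI pipeline, the evidence weights would come from an established attribution method such as SHAP or LIME rather than being supplied by hand; the essential point is that the rationale travels with the decision and can be reviewed after the fact.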


In the context of AWS, XAI serves two essential functions: (1) Ensuring Human Understanding and Validation: Human commanders must understand and validate the decisions made by AI systems. This is especially critical because AWS may be tasked with life-and-death decisions, such as determining who lives and dies in combat. Without explainability, these decisions could become opaque, preventing human operators from assessing the rationale behind AI actions. (2) Building Public Trust: As autonomous systems become more integrated into military operations, the public must be able to verify that ethical standards are being applied. XAI makes AI decisions transparent and understandable, demonstrating that AWS is used responsibly and with proper consideration for humanitarian concerns.


However, a significant remaining issue revolves around AI's trustworthiness in executing complex military tasks, such as target identification and elimination. While AI excels at processing vast data sets rapidly, its accuracy and reliability can vary significantly depending on the situation. In asymmetric warfare, for example, distinguishing between combatants and civilians may be difficult—especially when civilians are not wearing uniforms or are engaged in non-combatant roles—and AI’s ability to identify targets is not guaranteed.


This uncertainty underscores the need for flexibility in the use of AWS: the level of automation should be adjusted gradually according to the AI's confidence in its target identification. When AI systems have high confidence in identifying military targets, higher levels of automation may be acceptable. In uncertain scenarios, however, or when civilian casualties are a concern, human intervention should be required to ensure accountability and minimize harm. In such cases, AI should serve as an advisory tool, providing data and recommendations, while the final decision rests with a human commander who can weigh the ethical considerations.
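
A minimal sketch of this confidence-gated escalation logic appears below; the thresholds, data structure, and function names are illustrative assumptions, not a description of any existing system.

```python
# Hypothetical sketch: mapping model confidence and collateral risk to
# the level of human oversight argued for above. Thresholds and field
# names are illustrative assumptions only.

from dataclasses import dataclass
from enum import Enum


class AutonomyLevel(Enum):
    AUTONOMOUS = "engagement may proceed under standing authorization"
    HUMAN_CONFIRMATION = "engagement requires explicit human approval"
    ADVISORY_ONLY = "system may only report and recommend; no engagement"


@dataclass
class TargetAssessment:
    target_id: str
    is_military: bool        # classifier's verdict on the target's status
    confidence: float        # calibrated probability in [0, 1]
    civilians_nearby: bool   # collateral-risk flag from sensor fusion


def required_oversight(assessment: TargetAssessment,
                       high: float = 0.95,
                       low: float = 0.70) -> AutonomyLevel:
    """Escalate to a human whenever confidence drops or civilian risk rises."""
    if not assessment.is_military or assessment.confidence < low:
        return AutonomyLevel.ADVISORY_ONLY
    if assessment.civilians_nearby or assessment.confidence < high:
        return AutonomyLevel.HUMAN_CONFIRMATION
    return AutonomyLevel.AUTONOMOUS
```

The ordering of the checks carries the ethical weight: full automation is reached only when the identification is both confident and free of flagged civilian risk, and every other case falls back to human review.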


3. The Role of Human Oversight and Command Responsibility in AWS


Despite advances in AI, AWS fundamentally lacks a moral compass: an essential element in the complex ethical calculations required in warfare. This absence of moral agency creates a responsibility gap, which becomes particularly troubling in high-stakes military contexts, where decisions may result in significant loss of life or violations of international humanitarian law.


In practice, this gap can only be bridged through human oversight. While integrating AI into military decision-making offers numerous advantages, the need for human intervention remains paramount. Responsibility, a central tenet of JWT, dictates that military leaders must bear ultimate accountability for the outcomes of military operations, including the actions of AWS. Thus, even as AWS becomes more capable of independent decision-making, human commanders must retain final authority over AI-generated decisions.


Mandating human intervention, however, could undermine the core purpose of integrating AI into military operations: enhancing efficiency, speed, and precision. While human oversight may reduce the efficiency gains offered by AI, it is crucial to ensure that ethical considerations are not overlooked, particularly when the risk of civilian harm is elevated. Integrating AI into military operations should therefore not eliminate human oversight but should instead balance the benefits of automation with the necessity of ethical accountability.


In this context, as military AI systems become more advanced, the role of military leaders must evolve. Military leaders must have technical expertise and a deep understanding of the ethical implications of deploying AWS. They must be capable of intervening when AI makes questionable decisions, ensuring that AWS contributes to achieving military objectives while minimizing harm to civilians.


4. Conclusion


In conclusion, integrating AWS into military operations offers significant potential but must be approached carefully, in light of the ethical guidelines outlined in JWT. By ensuring that human oversight remains integral to decision-making, especially in high-stakes scenarios, and by adjusting the level of automation according to the AI's trustworthiness, it is possible to integrate AWS in ways that minimize harm to civilians, maintain accountability, and align with international law.


Ultimately, the primary objective of deploying AWS should be to minimize collateral damage on both sides of the conflict, particularly civilian casualties, while achieving legitimate military objectives. This objective aligns with the principle of proportionality in JWT, which asserts that the harm caused by military actions should not exceed the anticipated military advantage. When deploying lethal AWS, it is critical to ensure that these systems use deadly force only when necessary and only when the AI system has high confidence in identifying military targets. If the AI's confidence is low or there is ambiguity about the target's identity, human confirmation should be required to maintain proportionality and minimize the risk of civilian harm. Similarly, non-lethal AWS should be designed to hinder or deter rather than to kill, consistent with the principle of discrimination, which requires military actions to target only combatants, not civilians.

Dr. Kyungho Song is a political theorist and conceptual historian currently serving as a Postdoctoral Research Fellow in the Division of Solution-Seeking for Political Problems in the Age of Innovative Science and Technology at Yonsei University. He is also a Senior Researcher in the Climate Adaptation Living Lab R&D and the Yonsei Institute for North Korean Studies. As a founding member of “AI Five,” a collective of humanities scholars addressing the challenges of the AI Big Bang era, his recent research and writing focus on human rights, democracy, the climate crisis, artificial intelligence, and the evolving landscape of political science.
