Cyber Security, REAIM
The Impact of AI Advancements on Cyber Warfare: An International Legal Perspective
By Myung-Hyun Chung
Senior Researcher, Korea University Legal Research Institute
November 20, 2024
  • #Global Issues
  • #Technology & Cybersecurity

▶ AI's Transformative Role in Cyber Warfare: The integration of AI into cyber operations enhances both offensive and defensive capabilities, enabling automation, precision targeting, and real-time threat detection, but also scales the complexity and impact of cyber attacks.

▶ Legal Challenges of AI-Driven Cyber Operations: The rise of AI in cyber warfare complicates international law by raising questions about defining cyber attacks, attributing responsibility for autonomous actions, and addressing non-state actors' use of AI technologies.

▶ Need for Enhanced International Cooperation and Legal Frameworks: Developing clear definitions, accountability mechanisms, and integrating AI into International Humanitarian Law are essential for regulating AI-driven cyber warfare. Strengthened global collaboration and transparency are crucial for addressing these emerging challenges.

 

1. Introduction

 

The rapid advancement of artificial intelligence (AI) is fundamentally reshaping the landscape of cyber warfare. Cyber warfare refers to hostile activities in the digital realm, typically involving attacks on or disruptions of information systems, networks, and infrastructure. AI’s integration into cyber operations increases both the efficiency and scale of attacks, while also introducing new legal complexities. From an international legal perspective, this presents significant challenges in terms of defining, regulating, and addressing the consequences of AI-driven cyber operations. This article examines how the development of AI is impacting cyber warfare and explores key legal issues that arise from this intersection.

 

2. The Role of AI in Cyber Warfare

 

AI’s involvement in cyber warfare manifests in several critical ways.

 

First, automation of attacks has dramatically increased. AI-enabled cyber tools can autonomously identify vulnerabilities, penetrate networks, and launch attacks at speeds and scales far beyond human capabilities, allowing rapid, large-scale operations with minimal human intervention.

Second, AI increases precision in targeting. AI systems can analyze vast amounts of data, select high-value targets, and carry out highly sophisticated attacks that minimize collateral damage while maximizing strategic impact. This precision allows attackers to focus on critical infrastructure or key military targets with minimal risk of detection.

Third, AI improves defensive capabilities by automating threat detection and response. Machine learning algorithms can identify anomalous behaviors in real time, detect cyber threats, and autonomously deploy response measures, enhancing the speed and effectiveness of defensive actions.
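To make the defensive point concrete, the sketch below shows the simplest form of the anomaly detection the text describes: flagging observations that deviate sharply from a statistical baseline. This is a minimal, illustrative example only; real intrusion-detection systems use far richer features and trained models, and the traffic figures here are invented for demonstration.

```python
from statistics import mean, stdev

def detect_anomalies(observations, threshold=3.0):
    """Return the indices of observations whose z-score exceeds the threshold.

    A toy stand-in for real-time anomalous-behavior detection: values far
    from the mean (relative to the standard deviation) are flagged.
    """
    mu = mean(observations)
    sigma = stdev(observations)
    if sigma == 0:
        return []  # no variation, nothing to flag
    return [i for i, x in enumerate(observations)
            if abs(x - mu) / sigma > threshold]

# Hypothetical baseline network traffic (requests/sec) with one spike.
traffic = [100, 102, 98, 101, 99, 100, 103, 97, 100, 1500]
print(detect_anomalies(traffic, threshold=2.0))  # → [9]
```

In a deployed system, the flagged index would trigger an automated response (e.g., rate-limiting or isolating a host), which is the "autonomously deploy response measures" step the paragraph refers to.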

 

 

3. Legal Issues in the Age of AI-Driven Cyber Warfare

 

The rise of AI in cyber warfare brings several complicated legal challenges. These issues are particularly significant from an international law perspective and raise important questions about the responsibility, accountability, and regulation of AI-driven attacks.

 

3.1 Defining Cyber Attacks and Thresholds

 

Currently, international law lacks a clear and universally accepted definition of a "cyber attack." Traditional legal frameworks, such as those governing the use of force, are centered on physical damage or military aggression. The deployment of AI-driven cyber attacks, however, raises the question of how to categorize these actions. For example, if an AI system autonomously launches a cyber attack targeting critical infrastructure, should this be treated as an act of war under international law? The lack of clarity in defining what constitutes a cyber attack or use of force makes it difficult to apply existing legal frameworks to AI-driven operations.

 

3.2 Responsibility and Autonomous Systems

 

One of the most challenging issues arises from determining responsibility for actions taken by AI systems. If an AI system autonomously carries out a cyber attack that causes damage to another state, identifying who is legally liable—whether it’s the state that developed or deployed the AI, the manufacturer, or the AI system itself—becomes complex. The traditional legal principle of state responsibility might not fully apply when autonomous systems are involved, as human agents are not always directly responsible for the actions of AI systems. This raises significant concerns regarding accountability, as states may be reluctant to take responsibility for actions that were initiated by AI without human oversight.

 

3.3 Non-State Actors and Cyber Warfare

 

AI also allows non-state actors (such as terrorist groups or cybercriminals) to execute cyber attacks at a level of sophistication previously reserved for state actors. With access to AI tools, non-state actors can target critical infrastructure, cause significant disruptions, and even orchestrate widespread damage. This raises the question of how international law applies to non-state actors in the context of cyber warfare. Traditional laws of armed conflict and conduct of hostilities are generally designed to regulate state-to-state interactions, leaving a legal gap when non-state actors are involved in cyber conflict.

 

3.4 International Humanitarian Law (IHL)

 

AI’s impact on International Humanitarian Law (IHL), which governs the conduct of armed conflict, is another critical issue. IHL emphasizes the distinction between combatants and civilians and mandates the protection of civilian populations from unnecessary harm. However, AI-driven cyber attacks—especially those executed autonomously—raise questions about how IHL principles such as distinction, proportionality, and precaution can be applied. Autonomous systems may not fully respect the principles of proportionality (ensuring that the harm caused does not outweigh the military advantage) or distinction (targeting only combatants and military objectives), leading to potential violations of IHL. The development of AI weapons that can operate in cyberspace further complicates how to ensure compliance with these core humanitarian principles.

 

3.5 The Issue of Attribution

 

One of the most significant challenges posed by AI in cyber warfare is the issue of attribution: the process of identifying the actor behind a cyber attack. Attribution has always been a central problem in cyberspace, but the advent of AI in cyber warfare exacerbates this issue due to the autonomous and often anonymized nature of AI-driven cyber attacks.

 

 - Challenges of Attribution in Cyberspace

 

Attribution is crucial for determining the legal responsibility for an attack. Under international law, states are typically held responsible for actions that violate another state's sovereignty or cause damage to its infrastructure. In the case of cyber attacks, attribution determines whether the attack can be categorized as an act of war, a violation of sovereignty, or an international crime. However, cyber attacks, particularly those driven by AI, are inherently difficult to attribute for several reasons: the anonymity and concealment of attackers, the lack of clear technical indicators, and action through proxies and non-state actors.

 

  - International Legal Framework and Attribution

 

International law requires that the actor behind a hostile cyber operation be identified in order to assign legal responsibility and to decide whether a particular attack constitutes a violation of a state's sovereignty, an act of war, or a violation of international humanitarian law.

 

Under the UN Charter, the use of force by a state against another state is prohibited, unless it is in self-defense or authorized by the Security Council. For cyber attacks to fall within this framework, they must be attributable to a state or a state-sponsored actor. The key challenge is how to attribute responsibility for cyber attacks that involve sophisticated AI tools that may be operated with little or no direct human oversight. International law does not yet have clear guidelines on how to handle situations where an attack is carried out by an AI system that acts outside the direct control or oversight of a state or its military.

 

  - The Need for International Cooperation

 

The complexities of AI-driven cyber attacks underline the need for stronger international cooperation on attribution and accountability in cyberspace. Currently, there is no universally accepted standard for cyber attribution, and the lack of consistent mechanisms for investigation and response leaves states vulnerable to cyber threats that can be difficult to track and respond to.

 

For the international community to hold states accountable for AI-enabled cyber attacks, states should agree on standards for cyber attribution. This may involve:

 

▶ Joint Cyber Investigations: States could cooperate to share intelligence and resources for investigating and attributing cyber attacks, particularly those involving AI-driven technologies.

▶ Attribution Standards: The development of common standards for attributing cyber attacks would help to establish clearer criteria for determining whether an attack should be treated as an act of war or an international crime, and which party should be held responsible.

▶ Transparency and Reporting: States could commit to greater transparency in disclosing cyber incidents and attacks, making it easier to identify patterns of behavior and assign accountability.

 

 

3.6 Cybersecurity Cooperation and Regulation

 

The increasing use of AI in cyber warfare emphasizes the need for stronger international cybersecurity cooperation and regulation. Existing international norms and processes, such as the UN Group of Governmental Experts (UNGGE) on cybersecurity, offer some frameworks for cooperation, but they are not sufficient to address the unique challenges posed by AI. International legal frameworks need to evolve to include clear guidelines on the use of AI in cyber warfare, establish standards for state behavior, and create mechanisms for enforcing compliance. As AI technology evolves rapidly, international law will need to adapt in real time to address emerging risks and ensure that states and non-state actors alike operate within an agreed-upon set of norms.

 

4. Conclusion: The Way Forward for International Legal Frameworks

 

AI’s growing role in cyber warfare necessitates the development of new international legal frameworks to address its implications, as follows:

Defining Cyber Attacks and State Responsibility: Clear legal definitions of what constitutes a cyber attack in the context of AI-driven actions must be established. These definitions should also address how responsibility is determined, particularly when autonomous systems are involved.

 

 Establishing Accountability Mechanisms: New international frameworks must address the accountability of states and non-state actors who use AI in cyber warfare, ensuring that legal responsibility for cyber actions is clear.

 Integrating AI into International Humanitarian Law: There is a need for a clearer understanding of how IHL applies to AI-driven cyber operations, ensuring that humanitarian principles are upheld even in the digital realm.

 Strengthening International Cooperation: Given the global nature of cyber warfare, states must collaborate more effectively in developing binding agreements and cybersecurity standards that govern the use of AI in conflict. Attribution remains one of the most complex challenges in international law when it comes to AI-enabled cyber warfare. Information sharing through international cooperation could support attribution decisions and strengthen the application of international law to cyber operations.

 

As AI technology continues to advance, international law will need to address these challenges by developing more robust frameworks for cyber attribution, state responsibility, and accountability. Only through greater international cooperation, transparency, and the establishment of clear legal frameworks can the international community effectively manage the risks posed by AI in cyber warfare.

Myung-Hyun Chung is a research professor at Korea University Legal Research Institute and the vice director of Korea University Cyber Law Centre. She graduated from the University of Iowa (LL.M) and Korea University with Ph.D. in international law. Her research interests include international trade law, intellectual property law, data protection, cyber security, and digital trade. She has been involved in a number of governmental projects including Ministry of Foreign Affairs, Ministry of Industry and Trade, National Security Research Institute, Personal Information Protection Commission, etc. She is a member of the Korean Society of International Law, Korean Society of International Economic Law, and a board member of International Cyber Law Studies in Korea and Korean Society of Trade Remedies.
