
11 - Autonomous Weapon Systems

Title: Adapting the Law of Armed Conflict to Autonomous Weapon Systems

I. Introduction

  • Autonomous weapon systems (AWS) are increasingly automated military systems capable of selecting and engaging targets without human intervention.

  • Some AWS have been deployed for years, and anticipated advances in autonomy are raising both military and public concern.

  • UN discussions of AWS in May 2014, held under the Convention on Certain Conventional Weapons (CCW), highlighted the legal and ethical implications and called for coherent normative development.

  • Defining AWS: systems that, "once activated, can select and engage targets without further intervention by a human operator" (U.S. DoD Directive 3000.09, 2012).

  • Key Questions Addressed:

    • What differentiates AWS from other weapon systems?

    • Can AWS be regulated under the existing law of armed conflict (LOAC)?

    • If regulation is possible, how should it be done?

  • Conclusion: AWS require adaptation of existing legal frameworks; calls for outright prohibition are misguided.

  • The article proposes a three-tiered regulatory approach for the development, deployment, and use of AWS.

II. Incremental Development of Autonomous Weapon Systems

  • Automated weapons have existed in rudimentary forms for years (e.g., landmines that detonate without a human decision), making AWS an incremental development.

  • Current focus: technologically advanced systems that deliberately select targets rather than merely detonating on contact.

  • Existing systems typically operate in defensive contexts with low risk to civilians (e.g., automated missile- and rocket-defense systems).

  • Gradual integration of automation in military settings is a response to the need for faster decision-making and operational efficiency.

  • Genuine autonomy in AWS may remain rare due to operational and technological limitations.

III. Misguided Calls to Prohibit Autonomous Weapons Per Se

  • Critics claim that AWS inherently violate ethical and legal standards and should be banned outright, citing concerns about precision and accountability.

  • Arguments advanced in favor of a ban include:

    • Doubts about technology's ability to meet legal/ethical standards.

    • Ethical principles require human oversight.

    • Calls for bans on systems exceeding specified autonomy levels, i.e., those that remove human control from targeting decisions.

  • Concerns about removing human judgment and emotion from lethal decisions, and about strategic misuse of AWS, are significant but not unique to autonomous technologies.

IV. Regulating the Use of Autonomous Weapons Under the Law of Armed Conflict

  • Existing LOAC applies to newly developed weapon systems, which must be compatible with its rules.

  • Article 36 of Additional Protocol I mandates legal review of new weapons to determine whether their employment would, in some or all circumstances, be prohibited by international law.

  • Legal Review Criteria:

    1. Must not be indiscriminate by nature.

    2. Cannot cause unnecessary suffering or superfluous injury.

    3. Must not have uncontrollable harmful effects.

  • Whether AWS can be used lawfully hinges on context and operational purpose; the underlying technology varies widely in capability.

V. Developing and Cultivating Legal Rules and Codes of Conduct

  • Clear international regulation governing AWS is paramount, encompassing their design, deployment, and oversight.

  • A three-tiered regulatory framework is proposed:

    1. International Instrument: Clarifying AWS regulation under existing LOAC and outlining commanders' responsibilities.

    2. National Policies: Each state should define its rules and policies regarding AWS development while maintaining operational secrecy where necessary.

    3. Industry Standards: Collaboration among developers, the military, and legal advisors to build compliance with legal standards into AWS design.

  • The gradual development of legal standards is essential to adapt to technological advancements while maintaining core humanitarian principles associated with the law of armed conflict.

VI. Conclusion

  • The introduction of AWS necessitates a responsible path forward, focused on adaptation rather than prohibition.

  • AWS are not inherently illegal but raise significant challenges that must be addressed through established international legal frameworks.

  • The recommended three-tiered approach aims to ensure adherence to LOAC principles, balancing technological advancement with the protection of humanitarian standards.

A Legal Perspective: Autonomous Weapon Systems Under International Humanitarian Law

I. Introduction

  • Overview of the legal implications of autonomous weapon systems under International Humanitarian Law (IHL).

  • Autonomous weapon system definition: a weapon system with autonomy in its critical functions, i.e., able to select and attack targets without human intervention.

  • The analysis draws on ICRC documents for context and groundwork.

II. Compliance with International Humanitarian Law

  • Autonomous weapon systems are not explicitly regulated by IHL treaties.

  • Responsibility falls on states developing and deploying such systems to ensure compliance with IHL.

  • Core obligations include:

    • Ensuring distinction between military objectives and civilians and civilian objects.

    • Evaluating proportionality to avoid incidental civilian harm excessive in relation to the anticipated military advantage.

    • Implementing precautions in attack to prevent violations.

  • Legal accountability cannot be delegated to machines or programs; it remains with the human actors involved in weapon deployment.

III. Commanders' Judgement with Autonomous Systems

  • Commanders must ensure that autonomous systems do not impede the legal judgements necessary for IHL compliance.

  • Concerns arise if weapon systems function autonomously without direct human oversight, potentially undermining responsibility and legal obligations.

IV. The Martens Clause

  • The Martens Clause provides that, in cases not covered by existing law, civilians and combatants remain under the protection of the principles of humanity and the dictates of public conscience, extending ethical considerations to new means of warfare.

  • Concerns about human decision-making being supplanted by autonomous systems in lethal scenarios.

V. Legal Reviews of Autonomous Weapons

  • Following Article 36 of Additional Protocol I, states must legally review new weapons to ensure compliance with IHL.

  • Legal assessments should focus on characteristics of weapon systems and their operational reliability and predictability.

  • Compliance also depends on understanding and foreseeing the effects of the weapon in varied scenarios.

VI. Human Control Considerations

  • CCW member states have identified the necessity of "meaningful" human control over autonomous weaponry.

  • Essential components of human control include:

    • Predictability and reliability of weapon systems.

    • Human involvement during development, activation, and operation phases.

    • Knowledge of how the weapon functions amid changing conditions.

  • Stages of control include:

    1. Development Stage: Ensure compliance through technical design and testing.

    2. Activation Stage: Commanders determine context and environmental conditions for weapon engagement.

    3. Operation Stage: Human supervision and the ability to intervene during autonomous target selection and attack.

VII. Predictability in Weapon Functioning and IHL Compliance

  • Predictability is core to ensuring legal compliance and minimizing violations.

  • Predictable systems are easier to control; higher unpredictability correlates with increased risk of IHL breaches.

  • Complexity in systems can lead to greater unpredictability, especially in hostile conditions.

VIII. Accountability for Violations

  • Autonomous systems may create accountability challenges when IHL violations occur.

  • States remain liable for actions taken by their armed forces using such systems.

  • Responsibility may be ambiguous at various stages of the autonomous weapons’ lifecycle, making it challenging to identify liable individuals.

  • Clear accountability exists for those who intentionally program systems for IHL violations or recklessly deploy them.

IX. Conclusion

  • Ensuring IHL compliance requires retaining human control within the operation of autonomous weapon systems.

  • Future assessments should determine the necessary levels of human control not only for legal compliance but also to meet ethical standards.