Ban AI from Triggering Weapon Systems

The Issue

What is AI triggering?

It is a technology that enables an artificial intelligence to independently decide whether to use lethal weapons. AI triggering contradicts the fundamental principles of humanity and the customs of war, and threatens to dramatically increase violence and human suffering. It should be banned by international convention.

What are the problems with AI triggering?

Machines Can Become Uncontrollable
Complex algorithms, even when thoroughly tested, may encounter unforeseen situations where their behavior becomes unpredictable. For example:

  1. Errors or Malfunctions: A coding error, sensor failure, or misinterpretation of the environment could lead the AI to attack at the wrong time or target the wrong entity. Such incidents have occurred in less critical systems, like autonomous vehicles, where glitches caused accidents. With weapons, the consequences could be catastrophic, including mass casualties or conflict escalation.
  2. External Interference: AI systems are vulnerable to cyberattacks in which hackers could seize control, alter objectives, or disable safety mechanisms. Even without malicious interference, excessive autonomy can produce unintended actions if developers fail to account for every scenario.
  3. Misclassification Risks: An AI programmed to neutralize threats might mistakenly identify civilians as enemies due to flawed data or recognition errors. Without human oversight, such a system can act against its creators' intentions, making it dangerously unpredictable; the sketch after this list shows how removing the human from the loop turns a single misclassification into an attack.
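
A minimal, hypothetical sketch of that last risk, in Python. Everything here is an illustrative assumption rather than a description of any real system: the Detection record, the "combatant" label, and the 0.9 confidence threshold are invented. The point is structural: a fully autonomous policy turns one confident misclassification directly into an engagement, while a human-in-the-loop policy gives an operator the chance to veto it.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Detection:
        label: str         # classifier output, e.g. "combatant" or "civilian"
        confidence: float  # model confidence in [0, 1]

    def autonomous_engage(d: Detection) -> bool:
        # Fully autonomous policy: one confident misclassification
        # (a civilian labeled "combatant") leads straight to an attack.
        return d.label == "combatant" and d.confidence > 0.9

    def supervised_engage(d: Detection,
                          operator_confirms: Callable[[Detection], bool]) -> bool:
        # The same decision gated by a human operator, who can veto
        # a confident but wrong classification.
        return autonomous_engage(d) and operator_confirms(d)

    # A civilian misread by the model with high confidence:
    misread = Detection(label="combatant", confidence=0.97)
    print(autonomous_engage(misread))                   # True  -- the system fires
    print(supervised_engage(misread, lambda d: False))  # False -- the operator vetoes

The gating function changes nothing about the classifier; it only ensures that no single model error is sufficient, on its own, to cause an attack.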

Machines Cannot Be Held Accountable
AI systems that deploy weapons cannot be held responsible for their actions, creating significant ethical and legal challenges:

  1. Blurred Responsibility: Unlike humans, who can be prosecuted for war crimes or errors, machines are tools. Determining who is liable for a highly autonomous system's actions, whether the programmers, the algorithm's designers, or the commanding officers, is difficult. This ambiguity can lead to impunity after tragic mistakes, eroding trust in justice systems.
  2. Moral Disconnect: Human soldiers, aware of the consequences of their actions, may be deterred from excessive cruelty by guilt or fear of punishment. AI, lacking emotions or moral reasoning, operates solely on programmed parameters. This could produce actions a human would deem immoral or illegal, such as striking a densely populated area whenever its algorithm calls for it, where a human operator would hesitate.

Machines Are Incapable of Mercy
AI lacks empathy and humanity, making it unsuitable for decisions requiring compassion or restraint:

  1. Inability to Contextualize: In conflicts, humans may choose not to attack based on context, such as an enemy’s surrender, the presence of civilians, or opportunities for negotiation. AI, bound by algorithms, cannot account for such nuances unless explicitly programmed, potentially leading to excessive cruelty or indiscriminate attacks.
  2. Conflict Escalation: Human soldiers, guided by morality or fear of retaliation, may exercise restraint to avoid unnecessary casualties or preserve diplomatic options. AI, acting strictly within its programming, may show excessive aggression and ignore signs of capitulation or peace. An AI might keep attacking after an opponent surrenders simply because its algorithm does not recognize the signal, increasing casualties, undermining trust between the parties, and hindering peace efforts; the sketch after this list shows how easily a fixed rule set misses a surrender it was never taught to see.
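
Again a minimal, hypothetical sketch: the rule set and target attributes below are invented for illustration. An engagement policy assembled from fixed rules has no concept of surrender unless a developer explicitly wrote one in, so a white flag changes nothing about the decision.

    def engage_decision(target: dict) -> bool:
        # A rigid rule set: attack anything matching the threat criteria.
        # "Surrender" is not a concept this policy knows about, because
        # no rule was ever written to check for it.
        rules = [
            lambda t: t.get("armed", False),
            lambda t: t.get("in_restricted_zone", False),
        ]
        return any(rule(target) for rule in rules)

    # An armed soldier waving a white flag still matches the rules:
    surrendering = {"armed": True, "white_flag": True}
    print(engage_decision(surrendering))  # True -- the surrender signal is ignored

A human in the same position would read the white flag instantly; the policy cannot, because restraint was never one of its parameters.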

The Solution

We call for an International Convention on the Prohibition of AI Triggering Technologies. A ban on AI triggering is essential to:

  • Protect human lives and prevent unintended escalations of violence.
  • Uphold ethical standards in warfare and ensure accountability.
  • Safeguard humanity’s control over lethal technologies.

Help make it a reality and sign the petition.

Timofey V, Petition Starter

Petition created on June 15, 2025