The UK Must Lead on AI Safety at Global Summits - Not Retreat

Please share this petition personally with a friend so they can make their voice heard too. Every signature makes a real difference.


The problem

Advanced AI is being developed at unprecedented speed. The risks cross national borders: misuse, loss of control, catastrophic accidents. Only international cooperation can address them. [1, 2]

At the 2023 AI Safety Summit (Bletchley Park), world leaders warned of AI's "potential for serious, even catastrophic, harm." [3]

By the Paris 2025 AI Action Summit, safety was sidelined. MIT's Max Tegmark said "it almost felt like they were trying to undo Bletchley," calling the declaration's omission of safety "a recipe for disaster." [4, 5]

It's now 2026, and the AI Impact Summit in Delhi (19-20 February) is expected to host even more CEOs than the Paris Summit, underscoring a focus on commercial acceleration that crowds out safety entirely.

"Safety" to "Action" to "Impact": the progression in summit names is telling. Economic interests are displacing safety concerns at the very summits meant to address them.

We call on the UK Government to reverse the trend, starting at the Delhi Summit, by publicly advocating for binding international governance for advanced AI.

Many leaders and experts already agree with this:

  • Over 100 UK parliamentarians support binding regulations on the most powerful AI systems.
  • 12 Nobel laureates and hundreds of experts have already called for "governments to reach an international agreement on red lines for AI [...] by the end of 2026." [6, 7]

The Delhi summit is a critical moment to heed their call. AI needs clear boundaries, or "red lines": which risks are unacceptable? When must development be halted? Such limits can only be enforced through robust global coordination.

 
Our demand

We call on the UK Government to advocate for binding international red lines: clear, verifiable and enforceable limits on unacceptable AI risks. Not voluntary self-regulation. 

Red lines would prohibit AI systems that pose intolerable risks, including systems that:

  • undergo uncontrolled self-improvement,
  • deceive humans,
  • enable catastrophic misuse such as developing bioweapons.

The public agrees: 87% of UK citizens support requiring AI developers to prove their systems are safe before release, and 74% believe the government should prevent superhuman AI from being created anytime soon. [8]

Every signature amplifies this message: democracy demands that the future of AI not be decided by corporations alone.

 
How could an international agreement be enforced?

Training advanced AI systems currently requires massive data centres with specialised chips. This concentration makes regulation feasible.

An international agreement could rest on three pillars:

  1. International oversight: A new international body verifies compliance with safety standards in cooperation with national authorities. Whistleblowers are protected.
  2. Compute transparency: Large training runs are registered and supervised. AI chips can be designed so their use is traceable (a minimal sketch of such a registration check follows this list).
  3. Consequences for violations: States agree on joint sanctions against actors who cross red lines and develop strategies for crisis response.
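
To make pillar 2 concrete, here is a minimal sketch in Python of how a registration check for large training runs could work. Everything in it is an illustrative assumption rather than an agreed standard: the threshold, the names, and the reporting rule would all have to be fixed by treaty (for comparison, the EU AI Act presumes "systemic risk" above 10^25 floating-point operations).

  from dataclasses import dataclass

  # Illustrative reporting threshold in floating-point operations (FLOP);
  # an actual agreement would set the real value.
  REPORTING_THRESHOLD_FLOP = 1e25

  @dataclass
  class TrainingRun:
      developer: str
      parameters: float       # model parameter count
      training_tokens: float  # tokens processed during training

      def estimated_flop(self) -> float:
          # Common rule of thumb for dense models:
          # roughly 6 FLOP per parameter per training token.
          return 6 * self.parameters * self.training_tokens

  def must_be_registered(run: TrainingRun) -> bool:
      # True if the run exceeds the threshold and must be declared
      # to the (hypothetical) international oversight body.
      return run.estimated_flop() >= REPORTING_THRESHOLD_FLOP

  # Example: a 400-billion-parameter model trained on 15 trillion tokens.
  run = TrainingRun("ExampleLab", parameters=4e11, training_tokens=1.5e13)
  print(f"{run.estimated_flop():.2e} FLOP -> register: {must_be_registered(run)}")

Self-reported figures like these only become trustworthy with the hardware-level traceability mentioned above: chip registries and data-centre monitoring would let inspectors verify declared compute independently.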

The blocker is a lack of political will among top AI policymakers. This needs to change.
 
What experts say

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." - Statement on AI Risk, signed by the three most-cited AI scientists and the CEOs of leading AI companies

"I worked on OpenAI's safety team for four years, and I can tell you with confidence: AI companies aren't taking your safety seriously enough, and they aren't on track to solve critical safety problems." - Steven Adler, former OpenAI Dangerous Capability Evaluations Lead

"We emphasise: some AI systems today already demonstrate the capability and propensity to undermine their creators' safety and control efforts." - Consensus of leading AI safety researchers, including Stuart Russell and Andrew Yao,  International Dialogues on AI Safety, Shanghai 2025

 
About us

This petition is supported by PauseAI, an international grassroots movement advocating for greater safety and democratic control of powerful AI systems.
You can find out more about our activities, see our local groups and events, and join us at pauseai.info.

 

References

  1. https://aistatement.com/
  2. https://superintelligence-statement.org/
  3. https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023 
  4. https://time.com/7221384/ai-regulation-takes-backseat-paris-summit/
  5. https://fortune.com/2025/02/11/paris-ai-action-summit-ai-safety-sidelined-economic-opportunity-promoted/
  6. https://controlai.com/statement
  7. https://red-lines.ai/
  8. https://pauseai.info/polls-and-surveys

 
