AI Summits Need to Take Safety Seriously Again

The Issue

AI has enhanced human productivity and advanced medicine, yet this rapid progress carries risks that transcend national borders. Warnings from leading experts are clear: only through international cooperation can the world address dangers ranging from misuse to loss of control. [1, 2]

At the first AI Safety Summit (Bletchley 2023), the declaration warned of AI's "potential for serious, even catastrophic, harm" and signatories resolved to work together to ensure AI remains safe.

But at the AI Action Summit (Paris 2025), safety was sidelined by economic interests. Professor Max Tegmark of MIT said "it almost felt like they were trying to undo Bletchley," calling the declaration's omission of safety "a recipe for disaster."

Now the AI Impact Summit (Delhi, February 19-20, 2026) is approaching, with even more CEOs attending than in Paris.

From "Safety" to "Action" to "Impact": the trend looks grim. Yet more than 100 UK parliamentarians support binding regulations on the most powerful AI systems, and twelve Nobel laureates and hundreds of experts are calling for "governments to reach an international agreement on red lines for AI — ensuring they are operational, with robust enforcement mechanisms — by the end of 2026."

The Delhi summit is a critical moment to heed their call. Together we can reverse course!

Our Demand
AI summits need to take safety seriously again. We call on Canada’s delegation to publicly advocate for the following priorities at the upcoming AI Impact Summit:

  1. Binding safety standards. Voluntary self-regulation does not create real safety. We need independent rules that are verifiable and enforceable.
  2. Red lines. AI needs clear boundaries: Which risks are unacceptable? When must development be halted? Such limits can only be enforced internationally.

Technological progress is not an end in itself; it must be shaped responsibly, in the interest of humanity.

Every signature sends a message: the future of AI must not be decided by corporations alone. The vast majority of the population wants more protection, and we want real democratic participation!

What Are Red Lines for AI?
Red lines prohibit AI systems that pose an unacceptable risk to all of us, including systems that:

  • undergo uncontrolled self-improvement,
  • systematically deceive humans,
  • or enable catastrophic misuse such as developing bioweapons.

How Could an International Agreement Be Enforced?
Training advanced AI systems currently requires massive data centers with specialized chips. This concentration makes regulation highly feasible.

An international agreement could rest on three pillars:

  1. International oversight: A new international body verifies compliance with safety standards in cooperation with national authorities. Whistleblowers are protected.
  2. Compute transparency: Large training runs are registered and supervised. AI chips can be designed so their use is traceable.
  3. Consequences for violations: States agree on joint sanctions against actors who cross red lines and develop strategies for crisis response.

What Experts Say
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." — Statement on AI Risk, signed by the three most-cited AI scientists and the CEOs of leading AI companies

"I worked on OpenAI's safety team for four years, and I can tell you with confidence: AI companies aren't taking your safety seriously enough, and they aren't on track to solve critical safety problems." — Steven Adler, former OpenAI Dangerous Capability Evaluations Lead

"We emphasise: some AI systems today already demonstrate the capability and propensity to undermine their creators' safety and control efforts." — Consensus of leading AI safety researchers, including Stuart Russell and Andrew Yao. International Dialogues on AI Safety, Shanghai 2025

About Us
This petition is supported by the Montreal community of PauseAI, an international grassroots movement advocating for greater safety and democratic control of powerful AI systems. Website: https://pauseai.ca/en/montreal.html

 

The Decision Makers

Evan Solomon
Canada’s Minister of Artificial Intelligence and Digital Innovation

Petition created on February 1, 2026