AI Summits Need to Take Safety Seriously Again
The issue
This campaign has ended. If you already signed, thank you! If not, feel free to sign to express support for meaningful safety commitments at future summits.
AI has enhanced human productivity and advanced medicine, yet its rapid progress carries risks that transcend national borders. Warnings from leading experts are clear: only through international cooperation can the world address dangers ranging from misuse to loss of control. [1, 2]
At the first AI Safety Summit (Bletchley 2023), the declaration warned of AI's "potential for serious, even catastrophic, harm" and signatories resolved to work together to ensure AI remains safe.
But at the AI Action Summit (Paris 2025), safety was sidelined by economic interests. Professor Max Tegmark of MIT said "it almost felt like they were trying to undo Bletchley," calling the declaration's omission of safety "a recipe for disaster."
Now the AI Impact Summit (Delhi, February 19-20, 2026) is approaching, with even more CEOs attending than in Paris.
"Safety" to "Action" to "Impact" — the trend looks grim. Yet 100+ UK parliamentarians support binding regulations on the most powerful AI systems, and twelve Nobel laureates and hundreds of experts call for "governments to reach an international agreement on red lines for AI - ensuring they are operational, with robust enforcement mechanisms - by the end of 2026."
The Delhi summit is a critical moment to heed their call. Together we can reverse course!
Our Demand
AI summits need to take safety seriously again. We call on Australia's delegation to publicly advocate for the following priorities at the upcoming AI Impact Summit:
- Binding safety standards. Voluntary self-regulation does not create real safety. We need independent rules that are verifiable and enforceable.
- Red lines. AI needs clear boundaries: Which risks are unacceptable? When must development be halted? Such limits can only be enforced internationally.
Technological progress is not an end in itself; it must be shaped responsibly in the interest of humanity.
Every signature sends a message: the future of AI must not be decided by corporations alone. The vast majority of the population wants more protection, and we want real democratic participation!
What Are Red Lines for AI?
Red lines prohibit AI systems that pose an unacceptable risk to all of us, including systems that:
- undergo uncontrolled self-improvement,
- systematically deceive humans,
- or enable catastrophic misuse such as developing bioweapons.
How Could an International Agreement Be Enforced?
Training advanced AI systems currently requires massive data centers filled with specialized chips. This concentration of compute makes regulation highly feasible.
An international agreement could rest on three pillars:
- International oversight: A new international body verifies compliance with safety standards in cooperation with national authorities. Whistleblowers are protected.
- Compute transparency: Large training runs are registered and supervised. AI chips can be designed so their use is traceable.
- Consequences for violations: States agree on joint sanctions against actors who cross red lines and develop strategies for crisis response.
What Experts Say
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." - Statement on AI Risk, signed by the 3 most cited AI scientists and the CEOs of leading AI companies
"I worked on OpenAI's safety team for four years, and I can tell you with confidence: AI companies aren't taking your safety seriously enough, and they aren't on track to solve critical safety problems." - Steven Adler, former OpenAI Dangerous Capability Evaluations Lead
"We emphasise: some AI systems today already demonstrate the capability and propensity to undermine their creators' safety and control efforts." - Consensus of leading AI safety researchers, including Stuart Russell and Andrew Yao. International Dialogues on AI Safety, Shanghai 2025
About Us
Michael and Peter volunteer as co-directors of PauseAI Australia. Michael kickstarted the effort by organising the Melbourne protest against the Paris AI Summit's lack of safety commitments. Peter was inspired to found the Canberra chapter and gradually became more involved. Together they (and maybe you!) are working to get Australia to bring the major AI players to the table on sensible safety measures.
This petition is supported by PauseAI Australia, part of an international grassroots movement advocating for a pause in the development of the most dangerous AI systems until robust safety measures and democratic participation are established. Website: pauseai.au
Petition created on 31 January 2026