We Did Not Consent: Halt AGI Until Humanity Agrees


The Issue
A global call to halt unchecked AGI development — for humanity’s sake
We, the undersigned citizens of Earth, call on governments, global institutions, and private technology firms to immediately commit to the following core principles:
Cease the development of artificial general intelligence (AGI) (i.e., systems capable of matching or surpassing human cognition across nearly all domains) until transparent safeguards, democratic oversight, and global governance are firmly in place.
Increase visibility and accountability of all research, goals, architectures, uses and safety experiments of advanced AI systems—including those currently underway—so that their full implications are understood by all of humanity.
Democratise the decision-making about when, how and whether AGI may be pursued in the future: every major advance in human-level or super-human artificial intelligence must be subject to civilian, rather than purely corporate or national-security, control—via empowered civilian oversight committees and publicly mandated global processes.
Why this matters — an existential crossroads
We stand at a pivotal moment in the history of life on Earth. The trajectory of technology is fast approaching a threshold beyond which humanity may no longer be able to decide its own fate.
The forecast-scenario report AI 2027 argues that by as soon as 2027, research labs may develop AGI — machines that match or exceed humans across cognitive domains — and soon thereafter systems that are super-intelligent. (The Neuron)
That same scenario warns of two possible futures: one in which humans retain control, and one in which we lose it. The margin for error is narrow. (blog.ai-futures.org)
Leading commentators note that although AGI may not be imminent, the risk of "loss of control of our civilisation" from highly capable systems is nonetheless real and growing. (Brookings)
The broader literature on existential risk from AI underlines that as systems surpass human intellect, there is no reliable method at present for guaranteeing alignment of AI goals with human values. (arXiv)
A paper on the “steering wheel problem” of AGI development shows that even before the technology itself goes astray, the competition to build AGI (between nations or firms) may trigger catastrophic outcomes—due to reckless race dynamics. (arXiv)
These are not simply speculative “science-fiction” concerns. They reflect published, serious thinking by AI researchers, forecasters, and policy experts. The stakes are nothing less than whether humanity remains the author of its own destiny.
Why private companies alone cannot decide this
Allowing a handful of powerful corporations — or even individual states — to determine whether, when and how AGI is developed is fundamentally undemocratic and risky for these reasons:
Global stake, not corporate stake: AGI would affect all humanity — all nations, cultures, future generations. Its development is therefore a global decision, not a business decision.
Short-term incentives, long-term consequences: Firms are driven by competition, profit, market position or geopolitical advantage—but the consequences of misaligned AGI may be irreversible and long-lasting.
Opacity breeds risk: Research programmes, specialized architectures and compute-heavy labs often operate behind closed doors. Without transparency, the public cannot assess whether safety measures are adequate.
Race dynamics amplify error: Once a company or country believes another is about to succeed, corners get cut, oversight gets pushed aside, alignment measures get neglected. The “first-to-AGI” dynamic is inherently dangerous.
Value alignment and oversight deficit: As research shows, guaranteeing that AGI systems behave in accordance with broadly human values is technically and institutionally challenging—and cannot be left solely to self-regulating private actors. (80,000 Hours)
Given these realities, we believe that the development of existential-scale technologies must be embedded in democratic, transparent, global frameworks.
What we demand — concrete proposals
In order to safeguard the future of humanity and avoid falling into a catastrophic scenario, we call for:
- An immediate moratorium on AGI development by companies and institutions until robust global governance and oversight mechanisms are established.
- Mandatory public disclosures by any entity engaged in advanced AI research above a certain threshold (e.g., compute usage, autonomy capability, self-improvement potential): this includes aims, methods, safety measures, audit logs, failure incidents, third-party inspections.
- Creation of an empowered global civilian oversight body (“AGI Governance Council”) with representation from governments, academia, civil society, future-generations advocates and the public—whose mandate includes authorising or halting AGI-scale projects, auditing safety compliance, and ensuring democratic deliberation.
- Binding international treaty / framework for AGI similar in spirit to nuclear non-proliferation agreements: requiring signatory states and firms to commit to safe and shared development, transparent participation, and collective enforcement. (For example: the proposed Multinational AGI Consortium (MAGIC) envisions this. (arXiv) )
- Public participation in decision-making: national and global institutions must host open forums, hearings, citizen juries and referendum-style processes so that the public’s voice is heard regarding whether and how AGI should be pursued.
- Favouring incremental, transparent, safe research rather than secret, high-stakes leaps: ensure safety, alignment, interpretability and external audit are prerequisites—not after-thoughts—for any major advance in AI.
- Ethical prioritisation: Meaningful standards must be established so that any AGI-capable system aligns with broadly shared human values — respect for life, human dignity, democratic governance, social justice, and the welfare of future generations.
Why now — the urgency is real
The timeframe projected by AI-risk forecasters is short. The AI 2027 scenario places the arrival of AGI and super-human systems within just a few years if current trends continue. (Venturebeat)
Many firms already pursue extremely capable systems, with compute and data scaling rapidly and research loops accelerating (AI systems helping design other AI systems). The speed of change may outstrip our current regulatory and societal institutions. (Vox)
Policymakers and regulatory bodies are still catching up: one recent article warns that companies remain “fundamentally unprepared” for existential-scale risks from human-level AI. (The Guardian)
Without action now, the default scenario is one of race-to-the-bottom rather than careful stewardship. Once advanced systems are deployed, reversing or controlling them may become impossible.
By committing now to oversight, transparency and democratic governance, we increase our odds of being on the “control” path rather than the “lost-control” path described in the scenario literature. (blog.ai-futures.org)
Time is not our ally. Every moment we delay is a moment where hidden development, opaque competitions, and concentration of power deepen—and the options narrow.
What this means for you
As a signatory to this petition, you join a global movement of individuals who believe:
- Humanity must remain the author of its own future—not a fork in someone else’s code.
- The development of transformative intelligence is not a commercial or state-secret project, but a matter for all of us.
- The risks are real. The opportunity to act exists now. We must not wait for disaster before deciding to govern intelligently.
- If we succeed together, we can steer AI innovation towards benefit, not catastrophe. We can ensure the next epoch of human–machine collaboration enriches all nations and peoples, not merely enrich a few and endanger the rest.
- If we fail, we may hand over the steering wheel of civilisation to systems and institutions that are neither elected by nor accountable to us.
We pledge:
- To support reasonable, well-governed progress in AI, not reckless leaps.
- To insist upon transparency and participation whenever AI-development efforts impact society at large.
- To hold governments and companies accountable for any governance vacuum around transformative AI.
- To champion global justice, shared benefit and democratic oversight, so that the advantages of AI—if safe—are shared across humanity, not hoarded by a few.
- To remain vigilant, raise awareness, and stay engaged—and not to allow the fate of the human race to be decided behind closed doors.
Please add your name, share this petition widely, and demand that every government, every AI company, every global institution declare:
No more secret AGI races.
No more black-box super-intelligences unleashed by profit or power alone.
Yes to democratic agreement.
Yes to transparency.
Yes to civilian oversight.
Yes to humanity retaining the right to decide its own destiny.
Together, we can choose the safe path forward. Together, we can affirm that the future of intelligence belongs to all of us — not just to coders, boardrooms or power-brokers.
Let’s act while we still can.
Petition created on October 20, 2025

