Davos Humanity-Centric AI Pledge (10 Commitments)

Recent signers:
Rouyu Wu and 19 others have signed recently.

The Issue

The mad race to get to AGI/ASI is real and frighteningly fast. But the prevailing bet—that once we “hit AGI,” systems will somehow self-align with society—is naïve at best and reckless at worst. Look at the cost we are already paying on the internet, which is slowly dying under rapidly accelerating AI slop. The public internet is now swamped by 16+ billion AI-generated images (with 34 million added daily), rendering it unfit for training future models. 52% of articles on the internet are written by AI, and realistic-looking AI-generated videos are everywhere, drowning out the human signal and breaking the risk–reward model for creators. Almost 77% of organizations are concerned about AI hallucinations, and hallucinations increase in reasoning models. The next evolution of AI is agentic AI—systems that can act on their own. Let’s take a step back and understand the true risks. Today’s models are astonishingly intelligent and broken at the same time. They can design a chip, debug a driver, and summarise every paper on a topic – and then fail at basic causal reasoning or misread a simple real-world situation. Or miscount the letters in simple words like “strawberry” or “garlic”. This is jagged intelligence: extreme spikes of capability, deep valleys of brittleness. And we are giving that jagged intelligence agency. “We are taking an intelligence that hallucinates, that overestimates its own certainty, and we’re giving it the keys to your cloud account, your bank account, your factory, your city, and eventually your life.”

The risk isn’t a Hollywood apocalypse to be enjoyed with popcorn. It’s something more mundane and dangerous in the real world: imagine millions of small, confident, slightly wrong actions rippling through logistics, infrastructure, finance, cities, and our societies. While it’s easy to spin up millions of these agents almost at the flip of a switch, it’s hard to control them reliably. Foundation models still hallucinate in complex workflows, and agents can go rogue, get corrupted, or be hijacked by malicious actors. Shipping autonomy on top of instability is like pouring concrete on quicksand: a perfect recipe for disaster. And this agentic AI will prove foundational for the next evolution, physical AI, filling our precious Earth with robots and machines that work autonomously. Without adequate safety testing and validation of agentic AI and physical AI in the real world, and without anchoring them with a human in the loop, we are heading into a massive crisis almost blindfolded. Imagine an ecosystem of agents optimising local objectives in systems they barely understand.

The jaggedness didn’t matter as much when AI only produced text on a screen to be read by a human being. It becomes existentially important when that AI can act. And unlike the internet, we only get one Earth, a tiny pale blue dot. We need to make this transition in a safe and humanly responsible way, by grounding this AI properly through this pledge.

We, the undersigned, commit to building and deploying AI that advances human dignity, social trust, and shared prosperity—backed by measurable safeguards and accountability.

  1. Human dignity and agency first — AI must expand human choice, rights, and privacy.
  2. Safety before scale — We will test, red-team, monitor, and implement fail-safes before high-impact deployment.
  3. Accountability with remedies — A named human owner is responsible; harms will be traceable and remediated.
  4. Transparency in high-stakes use — Disclose AI involvement and enable appropriate explanations and independent evaluation.
  5. Consentful, lawful data practices — Respect consent and lawful basis; no covert extraction or exploitative surveillance.
  6. Fairness and inclusion by design — Measure and reduce bias; ensure access across languages, regions, incomes, and abilities.
  7. Shared prosperity — Invest in reskilling and just transition; avoid models that externalize societal costs.
  8. Information integrity — Deter deception, manipulation, and misinformation; support provenance and responsible political use.
  9. Protect human identity in the agentic era — No impersonation or actions without explicit authorization; users can revoke permissions.
  10. Security and planetary responsibility — Prevent misuse and reduce environmental footprint through efficient, transparent operations.

Thank you for your commitment to a safer and more fulfilling future for our kids and the generations to come.

 

Umakant Soni, Petition Starter: I have spent the last 16 years in AI. Cofounder of pi Ventures, AI foundry, and ARTPARK (AI & Robotics Technology Park). www.linkedin.com/in/soniumakant


The Decision Makers

Ramesh Raskar
https://www.linkedin.com/in/raskar/


Petition created on January 22, 2026