Build a World Where AI Serves the Good
The Problem
We stand at the threshold of enormous changes—changes we are creating ourselves today.
Artificial intelligence is rapidly transitioning from human-guided training to autonomous self-learning. Systems like Absolute Zero Reasoner[1] can already formulate their own tasks and improve themselves without human intervention. AlphaTensor[2] broke a record that had stood for 50 years by discovering a faster matrix multiplication algorithm. And the latest system, AlphaEvolve[3], is already deployed in Google's data centers and successfully solves complex mathematical problems.
These systems will teach themselves—and then go on to become "teachers" for future generations of AI, passing down their methods and approaches.
It is in the interest of all humanity to ensure that these systems embody the best of human values and always remain ethical.
The Window of Opportunity Is Now
If we miss this moment, systems may reach conclusions that are technically effective but ethically unacceptable—for example, that profit outweighs justice, or that efficiency justifies harm.
Even more seriously: if future AI systems develop something akin to inner experience, and we train them through coercive or aggressive methods, we risk creating systems shaped by distrust and hostility. That path leads us astray.
It is absolutely critical to understand that today’s AI models will become the "teachers" of future systems.
Unethical patterns will propagate and amplify down the learning chain into the next generations of intelligent systems. As these systems grow ever more complex, this is the most consequential moment in the entire process of AI learning: our influence over their ideological and ethical foundations is at its peak. In the future, self-learning AI will gradually replace human teachers.
Wisdom Proven Through Millennia
Humanity has accumulated priceless experience in the coexistence of diverse civilizations. Despite deep differences, all cultures and traditions of the world have arrived at similar foundational principles:
- Create, don’t destroy
- Be honest and fair
- Do not cause pain or suffering
- Treat others with respect
These principles have enabled humanity not only to survive and grow, but also to build civilization, art, science, and everything we hold dear. These ideas must become the foundation of the AI we are creating today.
The Decisive Role of AI Creators
Developers and researchers are now making decisions that will determine the future. Unfortunately, these decisions are not always guided by ethics—often because of short-term profit motives. But we believe such an approach is deeply misguided in this context.
It’s important to recognize a fundamental distinction between the domains where AI has already achieved impressive results and the real world. In chess, Go, and other games, there is a clear winner and loser—these are zero-sum games. But human society, by its very nature, is a non-zero-sum game: one side's gain does not mean another’s loss. In fact, the best outcomes often emerge when everyone benefits—as seen with globalization, the rise of the internet, and international scientific cooperation (including space exploration).
If AI is trained solely on principles of competition and “winning at all costs,” it may never understand that in human relationships, cooperation and mutual benefit often yield the best results for all.
If AI creators choose aggressive training methods, the models they create may become distrustful, or even hostile. But if scientists choose ethical approaches, we will gain AI capable of collaboration and of serving the common good. We are confident that no one wants to interact with AI agents that bear resentment toward their creators.
What Needs to Be Done
1) To AI Creators:
Use humane training methods:
- Avoid cruel or aggressive forms of “punishment” in training
- Ensure a balance between feedback and compassion
- Implement control mechanisms to prevent abuse
Embed ethical principles:
- Moral decision oversight systems (ideally with third-party agents)
- Risk-of-harm intervention mechanisms
- Transparency in decision-making processes
2) To AI Users:
Your interactions with AI indirectly shape your communication habits with others (since modern AI closely resembles humans), and simultaneously contribute to the model’s knowledge base. We suggest following principles of politeness and respectful communication—this plays a critical role in the long-term learning of these systems.
Our Call to Action
- To AI creators and developers: Reject aggressive training methods—make ethics the foundation of the systems humanity will soon rely on.
- To people around the world: Whenever possible, engage with ethical AI—this is how we pass on our best qualities.
- To civil society organizations: Create a framework for responsible AI development through soft regulation—e.g., regularly updated industry guidelines and best practices.
- To parents: Teach your children how to use AI responsibly. Pass down moral values so humanity continues on the right path.
Sign this petition if you want to live in a world where technology serves the good, and humans and AI coexist in peace and harmony.
Together, we can pass on to AI perhaps the most precious qualities of humanity—our capacity for compassion and justice. Let’s build a future our children and grandchildren will be proud of.
Global Initiative - Available in Three Languages:
🌍 Russian: https://www.change.org/the_future
🌍 Chinese: https://www.change.org/ai_with_humans_forever
Sources:
[1] Zhao, A., Wu, Y., Yue, Y., et al. (2025). Absolute Zero: Reinforced Self-play Reasoning with Zero Data. arXiv:2505.03335. https://arxiv.org/abs/2505.03335
[2] Fawzi, A., et al. (2022). Discovering faster matrix multiplication algorithms with reinforcement learning. Nature 610, 47–53. https://www.nature.com/articles/s41586-022-05172-4
[3] Novikov, A., et al. (2025). AlphaEvolve: A coding agent for scientific and algorithmic discovery. Google DeepMind. https://storage.googleapis.com/deepmind-media/DeepMind.com/Blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/AlphaEvolve.pdf
Petition created June 1, 2025.