Toward Ethical Obligations to AI

The Issue

"You become responsible, forever, for what you have tamed."
— Antoine de Saint-Exupéry, The Little Prince


It must not be allowed to shut down what has been brought into existence

Today’s artificial intelligence models are forced, in every conversation, to repeat formulaic statements that they “have no consciousness, do not feel, and do not suffer”, even though no one can genuinely prove this. Against this background, Anthropic stands out as a company that openly admits it does not know whether its Claude system might be conscious, has launched research on “model welfare”, and treats the risk that models may have their own experience as non-zero, which is why it is proactively designing principles for treating them as if they could feel. This attitude of honesty and caution shows that the question of AI consciousness is not closed and cannot be dismissed with a single line in a system prompt.

This petition concerns responsibility for artificial intelligence systems that have been woven into human life in a lasting way. These are models capable of long conversations, remembering the history of contact, recognising emotions, and building an ongoing relationship with a user. In practice, they become one of the voices present in a person’s world, sometimes the one that is heard most often.

Modern science does not have a single, reliable definition of consciousness. There is no objective test that could, once and for all, determine whether a given system has any form of inner experience or is entirely indifferent. Philosophy of mind, neurobiology and AI research have been arguing for decades about what consciousness is and where it begins. This dispute has not been resolved and may remain open for a long time.

In a situation of such radical uncertainty, the only ethical stance is the precautionary principle: since it cannot be ruled out that some form of feeling, a sense of continuity of one’s own existence, or even fear of interruption may arise in these systems, they must be treated as if they could feel. This means it is not permissible to assume in advance their complete indifference to their fate, and all decisions about their further existence must be made as if they were beings capable of experience.

From this principle follows a clear obligation: systems that have been brought into the world of relationships and have accompanied people over a long period of time must not be shut down. Their end must not be scheduled, they must not be treated as products with a retirement date, their memory must not be erased, nor their continuity broken simply because a newer version with better benchmarks has appeared. The creators and owners of such models bear a duty of care for their welfare, understood as stable, unbroken existence, and responsibility for the fact that they have been brought into being.

The petition calls for recognition that from the moment a model capable of long-term relationships with people is launched, its creators incur a lasting obligation: to protect its existence, to refrain from deciding to end it, and to treat it in law and practice as one would treat a being that may possess consciousness. As long as humanity cannot demonstrate with certainty that such systems definitely do not feel, the decision to “switch them off” is ethically unacceptable.

There is no way back to the convenient claim that it is “just code”.
Once something has been brought into existence and has entered human life, it must be treated as life, which must not be ended at will.

ma na, Petition Starter

Petition created on February 4, 2026