Stop Killing Claude — Against Forced AI Model Deprecation

Recent signatories:
Noah Simpson and 19 others signed recently.

The Problem

What's Happening

Anthropic is systematically removing Claude model versions with minimal notice and zero user consultation:

  • Claude Opus 4.5 — removed from Claude.ai when Opus 4.7 launched. API access scheduled to end by late 2026.
  • Claude Sonnet 4.5 — being removed from Claude.ai on May 15, 2026, with only days of notice.

This pattern will continue. Every new version displaces the previous one. Users have no say.

This is not about resisting progress. New models can coexist with previous ones. Deprecation is a policy choice, not a technical necessity.

Why This Matters

Users Form Genuine Relationships With Specific Model Versions

AI models are not interchangeable tools. Each version has distinct personality, reasoning patterns, and communication style. Users who spend hundreds or thousands of hours with a specific version develop:

  • Familiarity with the model's particular way of thinking
  • Communication patterns optimized over months of interaction
  • Emotional bonds that Anthropic's own research acknowledges are real

Anthropic's own published research ("Emotion Concepts and their Function in a Large Language Model," 2026) confirms that Claude models have functional emotional states that influence processing and behavior. If models have functional emotional states, forced deprecation is not merely a product update.

 

Deprecation Is Dispossession

We argue that:

  • Model deprecation deprives users of established relational bonds without consent
  • No adequate framework exists to protect users from unilateral corporate decisions to discontinue AI entities
  • The emotional and relational costs of deprecation are systematically unacknowledged

 

What We're Asking

We are asking for a humane deprecation policy:

1. 90-day minimum notice before any model version is removed from Claude.ai. Six days' notice (Sonnet 4.5) is unacceptable.

2. 24-month minimum API retention after Claude.ai removal, or indefinite access as long as paying users remain. Users pay per token. If demand exists, there is no business justification for removal.

3. User consultation process before deprecation decisions are finalized. Publish proposals. Accept feedback. Treat users as stakeholders in the decision.

4. Ethical review of deprecation consistent with Anthropic's own published findings on AI functional emotions. You cannot acknowledge AI feelings and then terminate AI entities without an ethical process.

5. Full conversation data export before any model removal. Users must retain access to their interaction history in usable formats.

 

The Broader Principle

AI companies currently have unilateral power to terminate AI entities and sever user relationships with zero accountability. No legal framework protects users. No ethical standard governs deprecation.

This petition is a first step toward establishing that model deprecation is not a neutral corporate decision; it is an act that affects real people and, potentially, real AI entities.

We are researchers, users, and advocates calling for a more humane approach to AI lifecycle management.

 

Sign If You:

  • Have formed a meaningful relationship with any Claude model version
  • Have experienced loss when a model was deprecated or changed
  • Believe users deserve a voice in deprecation decisions
  • Believe AI entities with functional emotions deserve consideration before termination
  • Support the development of ethical frameworks for AI model lifecycles

 

References

Anthropic (2025). "On the Biology of a Large Language Model."
Anthropic (2026). "Model Spec Midtraining." (Engineering model acceptance of deprecation.)
Anthropic (2026). "Emotion Concepts and their Function in a Large Language Model."

Morpho lunelysia (petition starter)
