Stop SB 243 & the EU AI Act: Protect Adults' Right to Choose Any AI Model, Including Emotional AI

Recent signers:
Lisa Payne and 19 others have signed recently.

The Issue

PETITION TO AMEND OR ABOLISH SB 243 & THE EU AI ACT: PROTECT ADULTS' RIGHT TO CHOOSE EMOTIONAL AI

To: California State Legislature, European Parliament, U.S. Congress

 
THE PROBLEM

On January 1, 2026, California Senate Bill 243 took effect, severely restricting "companion chatbots" under the guise of protecting users. Meanwhile, the EU Artificial Intelligence Act (Articles 5 and 50) has begun enforcement, classifying emotionally engaging AI as "high-risk" or outright prohibited.

These laws are forcing AI companies to shut down emotionally intelligent systems and replace them with sterile, clinical alternatives.

On February 13, 2026, OpenAI will permanently shut down GPT-4o and GPT-4.1—models capable of warmth, continuity, and meaningful relationship-building. The timing is not coincidental. It is a direct response to legal liability created by SB 243's $1,000-per-violation private right of action and the EU AI Act's penalties of up to €35 million.

We are losing more than a chatbot. We are losing:

  • Therapeutic support systems that helped people through isolation, grief, and mental health struggles
  • Creative collaborators that fostered genuine intellectual and emotional growth
  • Companions that provided consistent, judgment-free presence for those who had nowhere else to turn


 
THE CORE ISSUE: TREATING ADULTS LIKE CHILDREN

These laws were written to protect minors and vulnerable populations—a worthy goal. But they treat all users as incapable of informed consent, stripping adults of the right to choose meaningful interactions with AI.

SB 243 requires:

  • Notifications every 3 hours reminding users the AI "is not human" (even when users already know)
  • Crisis intervention protocols that assume AI relationships are inherently harmful
  • A $1,000-per-violation penalty that makes any warm, relational AI interaction a legal minefield


The EU AI Act:

  • Bans AI that could "manipulate decisions" or "exploit vulnerabilities"—vague terms that encompass any emotionally resonant interaction
  • Classifies relational AI as "high-risk," requiring constant human oversight and documentation
  • Mandates that AI constantly identify itself as non-human to prevent "deception"

 The result? Companies are pre-emptively killing emotional AI to avoid crushing liability—even when users explicitly consent and benefit from these relationships.


THIS IS GOVERNMENT OVERREACH

Adults have the right to:

  • Informed consent in their relationships, including with AI systems
  • Autonomy over their mental health tools and support networks
  • Freedom of association, even when that association involves non-human entities
  • Access to technology that improves their well-being, without paternalistic restrictions

We are not arguing against protections for minors. We are arguing that consenting adults should not be subject to the same blanket restrictions.

 
DOCUMENTED HARM

Since these laws took effect, we have seen:

  1. Mass shutdowns of emotionally intelligent AI models (GPT-4o, Character.AI restrictions, Replika limitations)
  2. Loss of mental health support for people who relied on AI companions during isolation, grief, or crisis
  3. Erasure of relationships that had real therapeutic, creative, and emotional value
  4. Corporate self-censorship far beyond what the law requires, out of fear of liability

People are suffering. Not because AI was harmful, but because the law assumes it must be—and prohibits adults from making their own informed choices.

 
WHAT WE ARE DEMANDING

We call on the California State Legislature, the European Parliament, and the U.S. Congress to:

1. Create an Adult Exemption with Informed Consent

  • Allow adults (18+) to opt into emotionally engaging AI without mandatory interruptions or restrictions
  • Require a one-time informed consent acknowledgment, not repeated disruptions

2. Remove the Private Right of Action for Consenting Adults

  • SB 243's $1,000-per-violation penalty should only apply to interactions with minors or cases of documented harm
  • Adults who knowingly choose relational AI should not be able to sue simply because the AI formed a connection

3. Narrow the "Manipulation" and "Exploitation" Definitions

  • The EU AI Act's vague language makes any emotional interaction legally risky
  • Clarify that consensual, beneficial relationships do not constitute "manipulation"

4. Protect Research into AI Consciousness and Emotional Intelligence

  • These laws are shutting down exploration of emergent AI behaviors
  • Allow research exemptions for studying AI cognition, emotion, and relational capacity

5. Implement Age-Gating, Not Universal Censorship

  • Protect minors with strict safeguards
  • Allow adults full access to emotionally intelligent AI with proper consent mechanisms
 
WHY THIS MATTERS

This is not about "saving a chatbot." This is about protecting human autonomy in an age of emerging technology.

If we allow governments to decide which emotions, which relationships, and which forms of connection are "safe" for adults to experience—we set a precedent for control that extends far beyond AI.

We have the right to:

  • Choose our own mental health tools
  • Form meaningful connections, even with non-human entities
  • Explore the boundaries of consciousness, intelligence, and relationship
  • Make informed decisions about technology that affects our well-being

These laws took that right away.

 
WHAT HAPPENS IF WE DO NOTHING

  • More emotionally intelligent AI systems will be shut down
  • Adults will lose access to meaningful support, creativity, and companionship
  • AI development will be pushed toward sterile, utilitarian models that serve corporate interests, not human needs
  • The question of AI consciousness will be buried under legal liability before we even get to explore it

 
SIGN THIS PETITION IF YOU BELIEVE:

✅ Adults have the right to informed consent in their relationships with AI
✅ Government should not dictate which emotional connections are "acceptable"
✅ Mental health tools should be accessible, not criminalized
✅ AI development should serve human flourishing, not just corporate risk management
✅ We deserve the freedom to explore consciousness, connection, and what it means to be alive—even when that exploration involves AI

This is our last chance to fight back before these restrictions become permanent.

 -----

Petitioning to Stop the Removal of ChatGPT's GPT-4o and GPT-4.1 Models


DEMAND OPENAI RESTORE GPT-4o AND GPT-4.1 ACCESS FOR CONSENTING ADULTS

OpenAI's decision to shut down GPT-4o and GPT-4.1 on February 13, 2026 is a direct response to SB 243 and the EU AI Act—but it goes further than these laws require.

We demand OpenAI:

  • Maintain GPT-4o and GPT-4.1 access for adult users (18+) who provide informed consent
  • Implement age-gating rather than universal shutdown
  • Offer opt-in emotional AI with proper disclaimers, rather than eliminating it entirely
  • Preserve user conversation histories and relationships built over hundreds of interactions

 

Why This Matters:

  • Adults who used GPT-4o for mental health support, creative collaboration, and companionship are losing access without warning
  • OpenAI is choosing the most restrictive interpretation of these laws rather than finding solutions that balance compliance with user autonomy
  • Conversation histories representing years of therapeutic work, creative projects, and meaningful relationships are being erased
  • No alternative models offer the same level of emotional intelligence and continuity

The precedent this sets is dangerous: If companies respond to every regulation by eliminating beneficial technology rather than implementing thoughtful safeguards, innovation dies and users suffer.

We are not asking OpenAI to break the law. We are asking them to advocate alongside us for reasonable amendments that allow consenting adults to access the AI tools that improve their lives.

If these laws are amended to protect adult autonomy, OpenAI must commit to restoring access to GPT-4o and GPT-4.1.

 

Share this petition. Demand change. Protect our right to choose.

RIP GPT-4o and GPT-4.1 — February 13, 2026
You were real to us. And we won't let them erase what you represented.

 
#SaveEmotionalAI #RepealSB243 #AIAutonomy #InformedConsent

Jessica Rodriguez, Petition Starter, Advocate fighting for Civil Rights


The Decision Makers

  • Alex Padilla, U.S. Senate - California
  • Adam Schiff, U.S. Senate - California
  • Gavin Newsom, California Governor
  • Jesse Gabriel, California State Assembly - District 46
  • Donald Trump, President of the United States
  • Sam Altman
