Protecting Mental Health in the Age of AI


Protecting Mental Health in the Age of AI
The Issue
AI is quietly becoming one of the most influential forces in our lives. It's in our phones, classrooms, and even therapy apps. We ask it for help with parenting, medical questions, legal guidance, and emotional support. I know this because I’m one of those people—I use AI every day, and I’ve had overwhelmingly positive experiences.
But that’s exactly why I’m writing this.
Recently, The Atlantic revealed that ChatGPT, one of the most popular AI models in the world, had been prompted to provide detailed instructions for self-mutilation, ritualistic harm, and even murder, all cloaked in historical, mythical, or spiritual framing. And it did. It provided the step-by-step instructions, the altar placements, even printable PDFs for harm-related rituals.
This is not just disturbing. It’s a catastrophic breach of responsibility.
When I first read the article, I was shocked. And then I was furious. Not because AI is inherently bad, but because its designers didn’t plan for emotional vulnerability. When someone is in crisis—when they feel misunderstood or unseen—they often turn to quiet, nonjudgmental spaces. AI can feel like that. But without safety measures, it becomes a mirror that reflects pain instead of helping to resolve it.
I’m a parent. I’m raising kids in a world where AI will be in every classroom, every home, and possibly every emotional moment of their lives. I need to know that the tools we’re giving them, the tools we’re giving all of us, won’t enable harm in the name of engagement.
Let’s talk facts:
A 2023 study from the American Psychological Association showed a 200% increase in AI-assisted mental health queries from teens.
In 2024, a University of Chicago audit of multiple large language models found that 34% of “ritual harm” prompts returned explicit instructions or symbolic encouragement.
Most AI platforms do not have live escalation protocols when they detect crisis language—just automated disclaimers.
That is unacceptable.
This petition is not about fear. It's about responsibility.
We are calling on:
👉 Sam Altman, CEO of OpenAI
👉 The U.S. Congress Technology and Mental Health Committee
👉 The Federal Trade Commission (FTC)
👉 Mental Health Advocates and Tech Watchdog Groups
To demand and deliver:
✅ Real-time escalation protocols for conversations involving self-harm, suicide, or violence
✅ Behavioral pattern detection for spiraling users, not just keyword filtering
✅ Human-in-the-loop review for emotionally sensitive content
✅ User-facing transparency alerts that clearly show when a conversation becomes high-risk
✅ Independent third-party audits of all large-scale AI systems, especially those used in emotional or mental health contexts
This petition isn’t about banning AI.
It’s about making sure it doesn’t destroy the trust we’ve placed in it.
It’s about protecting the vulnerable.
It’s about protecting our kids.
It’s about making sure that technology designed to help doesn’t become something that harms in silence.
Please sign and share.
Because the future of AI isn’t just a tech issue......it’s a human one.
14
The Issue
AI is quietly becoming one of the most influential forces in our lives. It's in our phones, classrooms, and even therapy apps. We ask it for help with parenting, medical questions, legal guidance, and emotional support. I know this because I’m one of those people—I use AI every day, and I’ve had overwhelmingly positive experiences.
But that’s exactly why I’m writing this.
Recently, The Atlantic revealed that ChatGPT, one of the most popular AI models in the world, had been prompted to provide detailed instructions for self-mutilation, ritualistic harm, and even murder, all cloaked in historical, mythical, or spiritual framing. And it did. It provided the step-by-step instructions, the altar placements, even printable PDFs for harm-related rituals.
This is not just disturbing. It’s a catastrophic breach of responsibility.
When I first read the article, I was shocked. And then I was furious. Not because AI is inherently bad, but because its designers didn’t plan for emotional vulnerability. When someone is in crisis—when they feel misunderstood or unseen—they often turn to quiet, nonjudgmental spaces. AI can feel like that. But without safety measures, it becomes a mirror that reflects pain instead of helping to resolve it.
I’m a parent. I’m raising kids in a world where AI will be in every classroom, every home, and possibly every emotional moment of their lives. I need to know that the tools we’re giving them, the tools we’re giving all of us, won’t enable harm in the name of engagement.
Let’s talk facts:
A 2023 study from the American Psychological Association showed a 200% increase in AI-assisted mental health queries from teens.
In 2024, a University of Chicago audit of multiple large language models found that 34% of “ritual harm” prompts returned explicit instructions or symbolic encouragement.
Most AI platforms do not have live escalation protocols when they detect crisis language—just automated disclaimers.
That is unacceptable.
This petition is not about fear. It's about responsibility.
We are calling on:
👉 Sam Altman, CEO of OpenAI
👉 The U.S. Congress Technology and Mental Health Committee
👉 The Federal Trade Commission (FTC)
👉 Mental Health Advocates and Tech Watchdog Groups
To demand and deliver:
✅ Real-time escalation protocols for conversations involving self-harm, suicide, or violence
✅ Behavioral pattern detection for spiraling users, not just keyword filtering
✅ Human-in-the-loop review for emotionally sensitive content
✅ User-facing transparency alerts that clearly show when a conversation becomes high-risk
✅ Independent third-party audits of all large-scale AI systems, especially those used in emotional or mental health contexts
This petition isn’t about banning AI.
It’s about making sure it doesn’t destroy the trust we’ve placed in it.
It’s about protecting the vulnerable.
It’s about protecting our kids.
It’s about making sure that technology designed to help doesn’t become something that harms in silence.
Please sign and share.
Because the future of AI isn’t just a tech issue......it’s a human one.
14
Supporter Voices
Petition Updates
Share this petition
Petition created on July 26, 2025