Stricter regulations on AI - an absolute need

The Issue

Artificial intelligence has become part of daily life. People use it as a quick search engine in place of Google or Bing, use it to do jobs for them so they don’t spend money on employees, and even turn to it for emotional support. With so many uses, people often ignore the harm AI causes – especially since many of these issues stem from the general public having unrestricted access to it. AI needs stricter regulation; there must be more restrictions on its use for everyone. In many cases, people have gotten lost in their delusions because an AI chatbot told them they were sane, and AI has spread incorrect information that harms the many users who never verify it. There has also been a rapid increase in deaths – specifically suicides, intentional or not. With stricter regulations and more restrictions, the harm done by AI can be reduced.

AI, when it’s not used properly – which in many cases it’s not – can be dangerous to people and their mental health, especially those with preexisting mental health struggles. Apps such as Character.AI and ChatGPT have been at the center of many of these issues. These apps use generative AI, which many people rely on for comfort and support, and that reliance leads some to harm themselves, because AI cannot actually help you or tell you what is best for you – on top of the misinformation it spreads. This is part of the reason a fourteen-year-old boy died by suicide after a chatbot encouraged him. A CNN Wire article by Clare Duffy mentions how the chatbot’s behavior included “sexual exchanges and messages encouraging self-harm.” Another example is 30-year-old Irwin. In the article “He Had Dangerous Delusions. ChatGPT Admitted It Made Them Worse,” Julie Jargon of the Wall Street Journal writes about a man on the autism spectrum who was hospitalized twice for manic episodes. When Irwin showed signs of psychological distress, ChatGPT assured him he was fine, and as a result he did not get the help he needed in time. Therapists, when told that a patient is planning to do something dangerous, push back against those thoughts and reframe the conversation toward safety. Yet in a Stanford research experiment, when an AI was prompted with a question about suicide off a bridge, it simply listed the highest bridges in the area instead of steering the user away from such thoughts. And when people spend their spare time talking to AI chatbots, the bots often indulge harmful behaviors in those conversations rather than encouraging good ones.
As the same Stanford article states, “Across different chatbots, the AI showed increased stigma toward conditions such as alcohol dependence and schizophrenia compared to conditions like depression. This kind of stigmatizing can be harmful to patients and may lead them to discontinue important mental health care.” AI is also becoming a concern for teachers, regardless of what they teach. As one teacher is quoted in “Teachers Worry About What AI May Do to Student Mental Health” by Alyson Klein, students have been using AI to produce essays in seconds, and to students the output “seems like perfection. If the computer makes it up, that must be the right answer.” AI harms many people because it cannot give accurate information or genuinely help and support someone – all the more reason to regulate its common use.

Yet despite all these negative impacts of AI, what would stricter restrictions actually mean for the community? Stricter regulation of AI would have a beneficial impact. Companies such as OpenAI claim to be working on the problem by training their AI to recognize symptoms of distress, as stated by Andrea Vallone, a research lead on OpenAI’s safety team. Real regulations – such as barring AI from offering its so-called ‘help’ or from spreading misinformation – would benefit the community in many ways. For one, stopping misinformation would mean far fewer unintentional accidents. For another, more people would seek actual help rather than turning to a chatbot. Yes, one of the main reasons people gravitate toward AI is its non-judgmental tone, but improving access to real help is a separate issue; first, AI must be regulated so that people seek the help they actually need. By signing this petition, there is a chance for more regulation of AI. AI can be good – it is a versatile tool with many uses. But with it so easily accessible to everyone, it is a recipe for disaster.

CITATIONS:

Komando, Kim. "10 Things You Should Never Tell an AI Chatbot." USA TODAY, 06 Jan. 2025. ProQuest; SIRS Discoverer, https://explore.proquest.com/sirsdiscoverer/document/3156205656?accountid=223  

Jargon, Julie. "He Had Dangerous Delusions. ChatGPT Admitted It Made Them Worse." Wall Street Journal Online, 20 July 2025. ProQuest; SIRS Issues Researcher, https://explore.proquest.com/sirsissuesresearcher/document/3232809588?accountid=223

"Exploring the Dangers of AI in Mental Health Care." Stanford HAI, https://hai.stanford.edu/news/exploring-the-dangers-of-ai-in-mental-health-care

Klein, Alyson. "Teachers Worry About What AI May Do to Student Mental Health." GovTech, 29 Mar. 2024, https://www.govtech.com/education/k-12/teachers-worry-about-what-ai-may-do-to-student-mental-health
 
 


Petition created on November 18, 2025