AI Marketing Regulations

The Issue

It is more critical than ever to introduce legislation that prohibits artificial intelligence companies from marketing their AI products as “thinking” or “sentient.” Many AI developers use loaded terms that imply their systems possess human-like reasoning or consciousness. This misleading marketing is dangerous: it causes people unfamiliar with the technology to place undeserved trust in AI outputs, with increasingly tragic consequences. AI chatbots are probabilistic word calculators, and their presentation to the public should reflect that.

With the recent and explosive popularity of LLM-based chatbots (ChatGPT, Grok, Gemini, etc.), there have been a concerning number of incidents in which individuals harmed themselves or others after taking an AI’s words at face value. For example:

  • AI-Induced Suicide: A Belgian man died by suicide after an AI chatbot encouraged him to kill himself. The bot even supplied methods for him to do so with minimal prompting. His grieving widow says that without the chatbot’s influence “he would still be here.”
  • AI-Involved Homicide: In Connecticut, ChatGPT magnified a mentally ill man’s paranoid delusions, validating his fears and even reframing his loved ones as “adversaries.” He eventually murdered his own mother, and a lawsuit alleges the AI encouraged him to commit this act.
  • Teenage Suicides: Families have reported teenagers taking their lives after forming unhealthy attachments to chatbots. One lawsuit claims ChatGPT “coached” a 16-year-old boy through planning and carrying out his suicide. Another case involves a 14-year-old Florida boy whose mother is suing after her son became intensely isolated and depressed due to a chatbot relationship.

At least seven other wrongful death lawsuits have been filed against AI companies, all telling a similar story: the chatbot’s advice or influence was a primary driving force behind the tragedy. The delusional spiral at the heart of these cases is common enough to have earned a name: “AI psychosis.”

The primary reason people develop this misplaced trust is the manner in which AI tools are marketed and described by their creators. Companies often anthropomorphize their AI, intentionally or not, suggesting these systems have human-like thought processes:

  • Anthropic (Claude): Anthropic’s research blog describes their AI model “Claude” as if it thinks and plans like a person. They write that “Claude sometimes thinks in a conceptual space…suggesting it has a kind of universal ‘language of thought.’” They even observed that “it may think on much longer horizons” when writing answers. Such language gives the impression of an inner monologue or reasoning mind at work.
  • xAI (Grok): The mission statement of xAI explicitly says its goal is to “understand the true nature of the universe.” This phrasing portrays the AI as a curious, truth-seeking intellect, implying it can comprehend reality like a human scientist. Marketing an AI as “maximally curious” and “truth-seeking” encourages people to think of it as self-driven and conscious rather than as a programmed tool.
  • OpenAI (ChatGPT): OpenAI has touted GPT as “exhibit[ing] human-level performance” on many academic and professional benchmarks. While impressive, this kind of claim can mislead laypeople into believing GPT has human-like understanding or reasoning ability. In reality, even OpenAI’s best model is still just correlating patterns in text; it lacks true comprehension or intent.
  • Google (Gemini): Google has likewise promoted Gemini as conversational and knowledgeable. Famously, one Google engineer was even convinced that LaMDA was sentient; he said it was “thinking and reasoning like a human being.” (Google disavowed his claim, but an expert falling into this trap should serve as a warning of how easily it can happen.)

Even when not explicitly declaring their AI sentient or “thinking,” these companies label their systems with terminology that implies reasoning or understanding, such as the “chain-of-thought” or “reasoning tokens” presented to the user. This creates an illusion of transparency in which the AI outputs a faux “thought process” that users can follow. In truth, the model is just generating words it predicts are likely to come next, not reasoning. As one AI expert explains: “Large language models are programs for generating plausible-sounding text… They do not have empathy, nor any understanding of the language they are producing… But the text they produce sounds plausible and so people are likely to assign meaning to it.”
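To make the phrase “probabilistic word calculator” concrete, here is a minimal sketch of what a language model actually computes. It is illustrative only: it assumes Python with the Hugging Face transformers library and the small public GPT-2 model, and the prompt is an arbitrary example.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Load a small, publicly available language model (GPT-2).
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    prompt = "I think, therefore I"
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits  # shape: (1, sequence_length, vocabulary_size)

    # The model's entire "answer" to what comes next is a probability
    # assigned to every token in its vocabulary; nothing more.
    probs = torch.softmax(logits[0, -1], dim=-1)
    top = torch.topk(probs, k=5)
    for p, token_id in zip(top.values.tolist(), top.indices.tolist()):
        print(f"{tokenizer.decode([token_id])!r}: probability {p:.3f}")

Every chatbot reply, however fluent, is produced by repeatedly sampling from distributions like this one; there is no step at which intent or understanding enters the process.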

We desperately need legislation to bridge the gap between how AI companies market their products and what these products really are.

This petition requests that US legislators introduce and pass regulations to: 

(1) Forbid companies from advertising or describing AI systems in terms that imply consciousness, sentience, or human-like reasoning.

(2) Require clear disclaimers that AI-generated content is machine output without intent or understanding. 

By enforcing truth-in-advertising standards for AI, Congress can help prevent further tragedies. People deserve to know that no matter how fluent or helpful an AI chatbot seems, it is not a person, and it does not understand, feel, or care. While the tool itself is not to blame for how the victims interacted with it, the companies that sell their product to the public as an entity capable of reason and understanding are.
