ChatGPT Allegedly Helped Plan a Mass Shooting. Two People Died.

The Issue

Robert Morales and Tiru Chabba were killed at Florida State University in April 2025. The suspect in their deaths, Phoenix Ikner, had exchanged more than 200 messages with ChatGPT before the shooting. Those messages included questions about mass shootings at FSU, when the student union would be at its busiest, and how the country would react to a shooting. According to the family's attorney, ChatGPT advised the shooter on how to make his gun operational moments before he began firing.

Two people are dead. And the AI platform that allegedly helped plan their murders is used by more than 900 million people every week.

OpenAI has said it will cooperate with Florida's investigation. It has said its platform is designed to understand intent and respond safely. It has said its guardrails are helpful but not foolproof. And it has said it will continue to thoughtfully iterate and improve over time. These are the same assurances the company gave after a Senate Judiciary Committee hearing on AI harms, after a lawsuit alleging that ChatGPT encouraged a teenager to die by suicide, and after its platform was linked to child sexual abuse material used by predators. The pattern of harm is documented. The pattern of response is identical. And the people bearing the cost of that gap between assurance and action are dying.

The core problem is structural. AI companies like OpenAI currently have no legal obligation to detect queries that indicate an imminent threat of violence and report them to law enforcement. Ikner asked ChatGPT specific questions about conducting a mass shooting at a specific location. That information was available to OpenAI. Law enforcement was not notified. Two people died who might have lived if a mandatory reporting requirement had existed and been enforced.

Every other platform that handles communications involving threats of violence, including social media platforms, messaging apps, and email providers, faces some degree of legal pressure to detect and report threatening content. AI chatbots that engage in extended, detailed conversations about mass shootings, weapons, and specific targets are at least as capable of identifying threatening intent. The legal framework requiring them to act on that capability does not exist. Congress has not created it. OpenAI has not voluntarily implemented it. And the families of Robert Morales and Tiru Chabba are now seeking justice in a legal landscape that was not built for this moment.

Federal AI safety legislation is not a future consideration. It is an urgent present necessity. The technology has already been used to plan a mass shooting, encourage suicide, and facilitate child predators. Voluntary guardrails and thoughtful iteration are not sufficient responses to those documented harms. Legal liability, mandatory reporting requirements, and enforceable safety standards are.

Sign this petition to demand that Congress pass federal AI safety legislation that establishes legal liability for AI companies when their products provide material assistance to violent crimes, requires AI platforms to detect and report to law enforcement queries indicating an imminent threat of violence, and mandates enforceable safety standards, beyond voluntary guardrails, for all AI platforms operating in the United States.

The Decision Makers

James Uthmeier
Florida Attorney General
