Protect Children from Ideological Bias in AI

Recent signers:
Laura Wass and 19 others have signed recently.

The Issue

Protect Children from Ideological Bias in AI: Demand Transparency and Accountability from OpenAI

AI platforms like ChatGPT are influencing how children understand gender, mental health, and identity. But many responses are based on flawed, politicized science—with no transparency or parental oversight. We demand accountability from OpenAI to protect vulnerable users, uphold scientific integrity, and respect parents’ rights.

To: OpenAI, AI Developers, Educational Institutions, and Regulatory Leaders

From: Concerned Parents, Educators, Health Professionals, and Citizens


We, the undersigned, are calling for immediate action to address a growing and serious concern: the use of biased, ideologically driven, or medically disputed information in artificial intelligence systems like ChatGPT—particularly when accessed by children and adolescents.

 

🚨 Why We’re Concerned:

AI tools are now used by millions of young people to explore deeply personal questions about identity, mental health, and belonging. But many responses on these sensitive topics—especially gender identity, suicide risk, and medical interventions—rely on flawed or discredited studies, presenting them as objective truth.

For example:

- AI often cites studies that have been formally corrected or publicly discredited, without acknowledging those corrections.
- Dissenting scientific voices, critical analyses, and accounts of regret and detransition are routinely excluded, even when relevant.
- Children may be encouraged—implicitly or explicitly—to consider irreversible medical interventions without adequate warnings or context.

This is not just misinformation—it is a violation of trust, scientific responsibility, and parental rights.

🧭 What We’re Asking For:

We demand that OpenAI and other AI developers implement the following:


- Transparency: Clearly disclose the sources, studies, and data behind AI responses on contested issues—especially those involving minors.
- Balance: Present diverse and evidence-based perspectives, including critiques of mainstream gender ideology and stories of regret, detransition, or harm.
- Disclaimers: Add visible warnings or disclaimers to content that discusses suicide, gender identity, or irreversible medical treatments—particularly when accessed by underage users.
- Parental Oversight: Provide tools and settings that allow parents to monitor or restrict access to ideologically sensitive content in AI platforms.
- Independent Review: Establish a transparent review board—including medical experts, parents, ethicists, and detransitioners—to audit outputs and policies for bias and harm.
- Respect for Belief Diversity: AI should not promote one worldview over others—especially not in areas involving spiritual beliefs, biological realities, or moral values.

🛑 Why It Matters:

We are not anti-technology. We are not anti-trans. We are pro-truth, pro-child, and pro-parent. Technology should serve families, not undermine them. And science should be open to scrutiny, not weaponized for ideology.

We believe that children deserve better. Parents deserve transparency. And AI companies like OpenAI must be held accountable.

Sign this petition to protect our children from biased AI content and to restore trust, truth, and transparency in the tools that shape their future.


The Decision Makers

Sue Dalangin
Volunteer


Petition created on July 21, 2025