Fix the AI censors in YouTube's moderation system

Recent signers:
Alex Hogendorp and 14 others have signed recently.

The Issue

I’ve watched many YouTube channels plead with their viewers, or simply say their farewells, because of unwarranted and harsh AI moderation actions. One channel I follow recently fell victim to this, and I personally lost my own channel. Watching years of hard work vanish in an instant is beyond tragic and deeply unsettling, and having no human to appeal to is soul-crushing.

It is a dystopian situation when even friends who appeared in your videos are penalized. YouTubers and viewers should not live in fear of enjoying their passions, provided they adhere to the clearly defined guidelines that have been the basis of the site for years. It is unjust for YouTube’s AI to flag content based on flawed algorithms that lead to wrongful terminations and penalties, and it is equally wrong that larger channels routinely escape the same punishment.

YouTube’s moderation system relies heavily on automated AI censors to sift through millions of videos. Many flagged videos are more nuanced than the AI can detect: medical and informational channels get flagged while channels hosting blatant pornography remain untouched. YouTube Kids has a particularly bad problem with inappropriate content dressed up as children’s media (think ElsaGate).

When the AI misinterprets context and flags content incorrectly, creators can lose their channels without a clear path to appeal or rectify the error, and the few options available to restore a channel are limited and inadequate. Many creators rely on YouTube as a source of income and as a way to connect with like-minded people who share their interests. They use their content not only to share crucial information but also to express themselves and to de-stress from the chaotic lives we all live. And many viewers depend on these channels for their content: weather updates, important world news, coverage of games and movies, doctors debunking misinformation spread online, and more.

Consider this: according to Statista.com, Q4 of 2023 alone saw the termination of 20.59 million channels. 2024 saw 15.79 million channels lost in Q1, 3.26 million in Q2, 4.87 million in Q3, and 4.82 million in Q4, and an estimated 5 million more were terminated in the first half of 2025. The leading cause of these removals is “Spam, deceptive practices & scams”, a category HEAVILY moderated by AI, and many of the strikes it issues are either false positives or based on ambiguous infractions. Not only does the AI flag channels erroneously, it terminates them immediately, despite their having no prior offenses, and appeals are then rejected on the basis of the same erroneous AI output. The age of human intervention has come to an end on the site, and this hubris and mismanagement will be the end of the site in its entirety.

There is a dire need for YouTube to drastically overhaul its AI moderation system, and we NEED to show the importance of human intervention in preventing these errors. At a minimum, the AI should be trained so that when it finds something potentially inappropriate, the case is escalated up the chain of command to a human reviewer rather than acted on automatically. We cannot and should not rely solely on AI to run a video-hosting website if we want to stop false positives and false reports from leaving a lasting impact on channels. YouTube must also provide a clear, timely, fair, and HUMAN appeal process for creators to contest decisions.

We need to show YouTube that its content creators are the foundation of its site, and that it needs to start protecting them by prioritising and correcting this massive oversight. Fixing or replacing the flawed AI moderation system would not only prevent needless losses but also foster a nurturing and secure environment in which creators can thrive.

Please sign this petition so that YouTube acknowledges this pressing concern and takes actionable steps to safeguard creative voices on its platform.

Amy Rose, Petition Starter


The Decision Makers

YouTube Trust and Safety Team
YouTube Board of Directors

Petition created on December 10, 2025