Blurred Reality: Digital Transparency in Combating AI-Driven Dis/Misinformation on TikTok


The Issue
As AI-generated content becomes more realistic and harder to detect, even digitally literate users struggle to distinguish it from authentic media. Keeping pace with the technology as it evolves is essential to preventing the spread of disinformation to the public. I have watched older individuals, including my own family and peers, question media that seems obviously fake to me, and vice versa. This concerns me for the future, because AI will only become less and less detectable in this digital era. A recent study found that "approximately one in four posts on TikTok contained misleading elements", with AI-generated content playing a key role in those numbers.

One obstacle to regulating media platforms is that AI-generated content attracts attention, engages viewers, and has become central to how companies operate in this new era of technology. Importantly, the measures being proposed "do not prohibit AI use; instead, they impose transparency obligations designed to protect users, real people, and the public interest". That way, people can still trust the media they consume, and platforms can still promote the AI-generated content that brings in views. The matter is pressing because "what seems like a reasonable policy today may prove inadequate tomorrow. The challenge is creating frameworks flexible enough to adapt while providing sufficient protection against increasingly sophisticated synthetic media".

That is why I propose that social media platforms like TikTok be required to enforce mandatory, visible, and non-removable labels on all altered media. This should be supported by a reporting option, automated detection, and user accountability measures, with penalties for non-compliance, in order to reduce dis/misinformation and preserve public trust in digital media.
TikTok has a foundation for dealing with AI-generated content, but it is not yet at the level it needs to be. The main goal is to prevent unlabeled AI-generated content from spreading in the first place, with correction as the fallback. The solution would start with a pre-publication verification gate at upload: if a video contains AI-generated content, it is labeled before it becomes publicly visible or is fully distributed by the algorithm.

A hybrid pressure model and an escalating accountability system would accompany this prevention protocol. Internal tools (labeling, reporting, and penalties) would be combined with external pressure (public backlash and potential regulation) and audits, moving to consequences (fines and limits) only if TikTok fails to meet clear standards. TikTok would be audited specifically on its response time to reported content and the percentage of AI content it detects. To keep oversight ethical, a hybrid audit system would be in place: the government sets the rules and penalties, but independent third-party auditors actually run the evaluations. Reported content should be handled within a 24-to-48-hour window, given how fast the platform spreads videos.

There would also be an appeal system for videos falsely flagged as containing AI. A successful appeal would trigger a review and a reinstated note under the video that lasts for one week; if the video went viral while falsely labeled, a week is more than enough time for viewers to see that it is safe, given the 24-to-48-hour window for labeling. The appeal itself should have two layers: first, because the initial flag comes from automatic detection, a TikTok moderation team would re-check the video for false positives; second, an independent or external audit body would serve as a final review option.
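To make the proposed workflow concrete, here is a minimal sketch of the gate-and-appeal logic in Python. Everything in it (the class names, the detector threshold, the SLA constant) is a hypothetical illustration of the policy described above, not any real TikTok system or API:

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical illustration of the proposed moderation flow.
# All names and values here are assumptions for the sketch.

class VideoStatus(Enum):
    PENDING = "pending"            # held at the pre-publication gate
    PUBLISHED = "published"        # no AI content detected
    LABELED = "labeled"            # published with a visible AI label
    UNDER_APPEAL = "under_appeal"  # creator disputes the label

SLA_HOURS = 48  # the proposed 24-to-48-hour handling window

@dataclass
class Video:
    video_id: str
    status: VideoStatus = VideoStatus.PENDING
    appeal_layer: int = 0  # 0 = none, 1 = platform team, 2 = external auditor

def review_upload(video: Video, detector_score: float,
                  threshold: float = 0.8) -> Video:
    """Pre-publication gate: label before distribution if the detector fires."""
    if detector_score >= threshold:
        video.status = VideoStatus.LABELED
    else:
        video.status = VideoStatus.PUBLISHED
    return video

def file_appeal(video: Video) -> Video:
    """Two-layer appeal: platform moderation first, external audit second."""
    if video.status == VideoStatus.LABELED:
        video.status = VideoStatus.UNDER_APPEAL
        video.appeal_layer = 1
    elif video.status == VideoStatus.UNDER_APPEAL and video.appeal_layer == 1:
        video.appeal_layer = 2  # escalate to the independent audit body
    return video
```

The point of the sketch is the ordering: detection and labeling happen before distribution, and a disputed label moves through the platform's own team before reaching an outside auditor.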
I believe the solution presented here is a strong starting point for catching up in this era of blurred reality. Some clarity needs to return to the digital era, so that we can once again trust what we see in the media without constantly questioning whether it is real. The people with the most power to solve this are those who run and own TikTok; ownership is shared among several companies, including ByteDance, Silver Lake, Oracle, and MGX. As for the regulatory measures in this solution that require government action, President Donald Trump would be the decision maker for that part of the proposal.

Petition created on April 28, 2026