A Beautiful Fake Army Soldier Fooled Millions. Stop Political AI Deception Now.

Recent signers:
Cameron Brown and 19 others have signed recently.

The Issue

A fake Army soldier gained over a million followers online.

She appeared in photos with world leaders. She posted political messages. She looked real enough that thousands of people engaged with, supported, and amplified her content.

But she did not exist.

This AI-generated account is not an isolated case. It is a warning sign.

As artificial intelligence becomes more advanced, it is becoming easier to create convincing fake identities that can spread political messaging at scale. These accounts can build trust, influence opinions, and shape public narratives before anyone realizes they are not real.

By the time they are taken down, the damage may already be done.

This is not about one political viewpoint or one group of people. It is about whether Americans can trust what they see online.

Right now, there are no clear, enforceable standards requiring platforms to identify and label AI-generated political content. There are no consistent rules preventing fake personas from posing as real people, including members of the military, to push messaging.

We are calling on Congress, the Federal Trade Commission, and social media companies to act immediately.

Platforms must be required to clearly label AI-generated political content and remove accounts that impersonate real people or institutions. Strong transparency rules must ensure users know when they are interacting with artificial content. And enforcement must be fast enough to stop these campaigns before they reach millions.

Technology should not be used to deceive the public.

If we cannot tell what is real, trust breaks down. And when trust breaks down, so does informed decision-making.

We still have time to set the rules before this becomes the norm.

We cannot afford to wait until millions more are misled.

The Decision Makers

Mark Zuckerberg
CEO of Meta Platforms