Ban harmful AI deepfakes

The Issue

Deepfake technology represents one of the most pressing and insidious threats of the current digital age. The urgent need to address the proliferation of this AI-driven manipulation goes far beyond theoretical concerns, extending into tangible harm impacting individuals, businesses, and the very foundations of global society and democracy. The argument for stringent controls and the development of effective countermeasures is not merely about preserving convenience, but about safeguarding truth, trust, and security in an increasingly digital world.
The personal toll of deepfake scams is a stark reality that hits close to home for many. In recent months, individuals I know have fallen victim to sophisticated fraud campaigns orchestrated using hyper-realistic AI-manipulated videos and audio. This form of deception produces fake content so convincing it bypasses normal human skepticism, leading victims to believe they are interacting with trusted figures, such as a CEO, a family member, or a friend in distress. The personal impact has been profound, resulting not only in severe financial loss—with deepfake fraud losses in North America exceeding $200 million in Q1 2025 alone—but also in significant emotional distress, a deep sense of betrayal, and lasting insecurity. These are not isolated incidents; in 2024, nearly half of all businesses reported falling victim to a deepfake attack, with average losses of hundreds of thousands of dollars per incident.


Beyond individual financial and emotional devastation, deepfakes are rapidly evolving into a tool for a wide array of malicious activities that threaten societal stability:


Political Destabilization: Deepfakes can be weaponized to create and spread misinformation during elections, depicting political candidates or officials saying or doing things they never did. The speed at which such content goes viral on social media means the initial impact can influence public opinion and election outcomes before the content can be fact-checked and debunked, undermining the integrity of democratic processes globally.


Erosion of Trust and Reputation: The ability to fabricate compelling evidence erodes public trust in all media, making it difficult to discern fact from fiction. This "post-truth" environment allows bad actors to damage reputations instantaneously by creating fictitious compromising situations. For high-profile individuals, the damage is often irreversible, but for ordinary citizens, the lack of resources to fight back can be even more devastating.


National Security and Corporate Espionage: Deepfake technology can be used for blackmail, corporate espionage (e.g., impersonating a CFO to authorize a multi-million dollar transfer, as seen in a 2024 case in Hong Kong), and potentially even to incite public unrest or create diplomatic incidents.
Despite these undeniable threats, the regulatory landscape remains fragmented and often insufficient, struggling to keep pace with the rapid technological advancements. While recent legislation like the U.S. "TAKE IT DOWN Act" signed in May 2025 has created federal criminal penalties for non-consensual intimate deepfakes, significant gaps remain, particularly concerning general disinformation and fraud.
A comprehensive approach is urgently needed, demanding collective action from lawmakers, tech companies, and the public:
Stringent Legislation and Accountability: We need harmonized global and national laws that criminalize the malicious creation and distribution of deepfakes for fraud, defamation, and political manipulation, with meaningful penalties to act as a deterrent.
Mandatory Transparency: Tech companies must be held accountable and mandated to implement robust mechanisms like "generative watermarking" or content credentials (e.g., Adobe's C2PA standard) that are difficult to remove. This would provide an embedded history of content origin and editing, enabling platforms and users to verify authenticity.
Enhanced Detection and Mitigation: Significant resources should be allocated to the development and widespread deployment of effective AI detection tools (such as those offered by companies like Reality Defender and Pindrop) that can identify manipulated content quickly and at scale.
Public Education and Digital Literacy: Empowering the public to spot telltale visual and audio cues of deepfakes (like unnatural eye movements or inconsistent lighting) and fostering healthy skepticism toward sensational online content is a crucial defense mechanism.
The fight against malicious deepfakes is a fight for the integrity of information and the safety of individuals. By mobilizing as a community and demanding immediate, collective action, we can ensure that our laws, policies, and technological advancements protect us from this modern menace and foster a safer, more trustworthy digital world for everyone.

The Decision Makers

Donald Trump
President of the United States


Petition created on November 5, 2025