

Protect The Film Industry
The issue
Humans Make Films. AI Should Not.
We call on legislators in the United States, European Union, United Kingdom, and Australia to mandate that all AI video generation companies implement guardrails that prevent their tools from producing commercially viable feature-length films, TV series, or streaming content, while preserving creative, artistic, and short-form use.
AI should empower our industry, not replace it, and we have a concrete proposal for how to achieve that.
Our Proposal
The core idea is simple: Visual Drift
AI video generation should remain a brilliant tool for creating short-form content, including clips, concepts, art, marketing, storyboards, and creative exploration. But the output should be inherently unsuitable for producing a coherent, watchable feature-length film or TV series.
We achieve this not by limiting what people can generate, but by requiring that AI generation tools introduce a small amount of natural variation into their outputs. We call this “visual drift.”
Think of it like a dream. In a dream, you know the person in front of you is supposed to be the same person. The kitchen you’re standing in is supposed to be the same kitchen. But something is always slightly different. The face shifts a little. The room rearranges itself subtly. It’s beautiful and vivid, but it drifts.
That’s exactly what we’re asking AI generation tools to do. Any single clip can look absolutely stunning, as photorealistic and cinematic as the technology allows. But when someone tries to generate the same character across hundreds of separate scenes, or produce a single film-length generation, the tool introduces gentle but persistent drift that makes the result feel like a dream rather than a film.
A 30-second clip? Perfect. A two-minute concept video? Beautiful. A 90-minute feature film or a ten-episode series assembled from hundreds of AI-generated scenes? It drifts. The characters’ faces shift subtly between scenes. The locations rearrange themselves slightly. The performances feel a touch inconsistent. Individually, every scene is stunning. Together, they don’t hold up as cinema or as a series you’d sit through.
AI video generation tools already exhibit natural visual drift. Characters shift between generations. Environments change. Consistency is difficult to maintain. But this is a limitation that every major AI company is actively working to eliminate. Character consistency across scenes is one of the most heavily invested areas of AI video research right now. What we are asking for is straightforward: mandate the drift that currently exists naturally. Lock it in before it's engineered away. This guardrail doesn't require building anything new. It requires preserving something that already exists, and preventing its removal.
The specific thresholds of drift would be determined by a technical standards body in consultation with the AI industry. The principle is a minimum threshold of variation that preserves the quality of individual clips while preventing the consistency that feature-length storytelling requires.
Here are the nine specific mechanisms we propose.
The Nine Mechanisms
Visual Drift Between Separate Generations
1. Character Drift. When someone generates the same character across separate generation requests, the tool must introduce a minimum threshold of variation in the character’s facial features, proportions, and physical appearance. It’s still a similar character, the way someone in a dream is still the same person, but the face drifts slightly. The jawline shifts a touch. The eye spacing changes subtly. A single scene looks perfect. But across fifty scenes, the audience can feel that something isn’t quite right. Not enough consistency to sustain belief over a full film or series.
2. Environment Drift. When someone generates the same location across separate requests, the tool must introduce a minimum threshold of variation in the spatial layout, lighting direction, colour temperature, and architectural details. It’s still a similar kitchen, a similar office, a similar street. But the proportions shift gently. The window that was on the left has moved to a slightly different position. The lighting comes from a subtly different angle. Like a dream, the place is familiar, but it’s never quite the same room twice.
3. Continuity Drift. Between separate generation requests, the tool must introduce subtle variation in wardrobe details, time of day, weather conditions, and ambient elements, even when the user requests continuity. A character wearing a blue jacket in scene one might have a slightly different shade of blue, or a slightly different collar shape, in scene two. The sunny afternoon might shift to a slightly different quality of light. These are the kinds of details that film and TV crews spend enormous effort maintaining with script supervisors and continuity departments. With gentle drift introduced, AI-generated scenes cannot achieve the seamless flow that makes a film or series feel real.
4. Performance Drift. AI-generated performances must exhibit a minimum threshold of variation in facial expression patterns and eye behaviour between separate generation requests. In a real film or TV show, an actor’s emotional vocabulary is deeply consistent. The way they furrow their brow when worried. The way their mouth tightens when angry. The way their eyes hold contact during an intimate conversation. These patterns are what make a performance feel human.
Eye contact is especially critical. The way a character’s gaze tracks another person, the rhythm of their blinks, the subtle shifts in where they look during a conversation. Audiences are extraordinarily sensitive to these patterns, even subconsciously. It’s the primary way we emotionally bond with a character on screen. When those gaze patterns drift between scenes, that emotional bond quietly breaks.
By introducing a minimum threshold of drift in acting, expression and gaze patterns between generations, the character still emotes beautifully in any single scene. Their eyes are still compelling, their expressions still real. But across an entire film or series, the performance feels like it belongs to several slightly different versions of the same person. Convincing in isolation, but never quite forming the singular human presence that storytelling demands.
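The between-generation mechanisms above share one technical principle: a mandated floor on how much a "same" character, location, or performance may vary between requests. As a toy sketch only (the parameter names, the 0.05 threshold, and the identity-vector framing are illustrative assumptions, not part of the proposal), the idea can be pictured like this:

```python
import math
import random

MIN_DRIFT = 0.05  # hypothetical mandated minimum variation per request


def drift_identity(identity, rng, min_drift=MIN_DRIFT):
    """Return a copy of an identity vector perturbed by at least min_drift.

    Each generation request for the "same" character starts from the stored
    identity but is nudged, so no two requests ever match exactly.
    """
    while True:
        drifted = [v + rng.gauss(0, min_drift) for v in identity]
        # Enforce the floor: re-sample until the perturbation is large enough.
        if math.dist(identity, drifted) >= min_drift:
            return drifted


rng = random.Random(42)
face = [0.31, 0.72, 0.55]          # toy "jawline / eye spacing / ..." parameters
scene1 = drift_identity(face, rng)
scene2 = drift_identity(face, rng)
assert math.dist(face, scene1) >= MIN_DRIFT   # every request drifts a little
assert scene1 != scene2                        # and never drifts identically
```

Any single drifted identity still looks like the character; only across many scenes does the accumulated inconsistency become felt, which is the intended effect.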
Progressive Drift Within a Single Generation
As AI advances, it will become possible to generate an entire film or series episode from a single prompt. The mechanisms above wouldn’t prevent this because there’s only one generation. This mechanism ensures that even a single continuous generation cannot produce a watchable result.
5. Progressive Visual Drift Over Duration. AI tools must introduce gently increasing variation in visual parameters proportional to the duration of a single continuous generation. This includes character appearance, environmental details, spatial layout, lighting, and colour grading. The first few minutes look flawless. Beyond approximately 3 to 5 minutes, progressive drift begins to accumulate. By 15 to 20 minutes, the output feels dreamlike. Beautiful, but no longer consistent enough for cinematic or broadcast-quality viewing.
This does not limit generation length. Users can still generate as much content as they like. It simply means that the longer a single generation runs, the more variation accumulates. AI remains a brilliant tool for short-form content while being inherently unsuitable for feature-length or episodic production.
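A duration-proportional schedule like the one described can be sketched in a few lines. All specific numbers here are hypothetical placeholders for thresholds a standards body would set:

```python
def drift_magnitude(t_seconds, onset=180.0, full=1200.0, cap=1.0):
    """Hypothetical drift schedule for a single continuous generation:
    no added drift before ~3 minutes, ramping linearly toward a
    'dreamlike' cap by ~20 minutes of continuous output."""
    if t_seconds <= onset:
        return 0.0
    frac = min((t_seconds - onset) / (full - onset), 1.0)
    return cap * frac


assert drift_magnitude(60) == 0.0        # a one-minute clip: untouched
assert 0 < drift_magnitude(600) < 1.0    # ten minutes in: noticeably drifting
assert drift_magnitude(1800) == 1.0      # half an hour: fully dreamlike
```

The key design property is that short-form output is entirely unaffected: the function returns zero for anything under the onset threshold, so clips, concepts, and storyboards generate exactly as they do today.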
Transparency
6. Invisible Forensic Watermarking. Every frame of AI-generated video must contain an invisible watermark, undetectable to the human eye but readable by verification tools, that identifies the content as AI-generated and records which tool produced it and when. This uses the C2PA standard, an open technical standard already supported by Adobe, Microsoft, and Intel. The European Union’s AI Act is already mandating this by August 2026. We call for wider adoption. The watermark doesn’t affect the viewing experience at all. It simply ensures AI-generated content can always be verified.
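The "invisible to viewers, readable by tools" property can be illustrated with a classic least-significant-bit watermark. To be clear, this is a teaching toy, not how C2PA works: C2PA attaches cryptographically signed provenance manifests to the file, and production watermarks are far more robust than this. The idea it demonstrates is only that a payload can ride along in pixel data without perceptible change:

```python
def embed_watermark(frame, payload_bits):
    """Toy invisible watermark: hide payload bits in the least significant
    bit of each pixel value. Changes each pixel by at most 1, which is
    imperceptible, yet trivially recovered by a verification tool.
    (Illustration only; real provenance marking such as C2PA Content
    Credentials uses signed metadata, not this naive scheme.)"""
    return [(pixel & ~1) | bit for pixel, bit in zip(frame, payload_bits)]


def read_watermark(frame, n_bits):
    """Recover the first n_bits of payload from a marked frame."""
    return [pixel & 1 for pixel in frame[:n_bits]]


tool_id = [1, 0, 1, 1, 0, 0, 1, 0]             # hypothetical 8-bit tool code
frame = [200, 201, 199, 203, 198, 202, 200, 197]  # toy 8-pixel "frame"
marked = embed_watermark(frame, tool_id)
assert read_watermark(marked, 8) == tool_id      # verification recovers it
assert all(abs(a - b) <= 1 for a, b in zip(frame, marked))  # invisible change
```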
7. Cross-Session Output Fingerprint Registry. All AI video generation companies must contribute to a shared, independent industry registry where every generated clip is fingerprinted. This enables detection when someone assembles hundreds of separately generated clips into feature-length or episodic content, even if each individual clip was produced within all the guardrails. The technology already exists at scale: YouTube’s Content ID system processes hundreds of millions of videos using the same principle.
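The registry idea rests on perceptual fingerprinting: every generated clip is hashed into the shared registry, and a long work assembled mostly from registered clips can be flagged. A minimal sketch, under the assumption of a toy brightness-based fingerprint (real systems like Content ID use far more robust audio/video hashes), and with the 80% threshold purely illustrative:

```python
import hashlib


def clip_fingerprint(frames):
    """Toy perceptual fingerprint: quantise coarse per-frame brightness
    into a short signature, then hash it. Coarse quantisation means small
    re-encoding changes still produce the same fingerprint."""
    signature = bytes(int(sum(f) / len(f)) // 16 for f in frames)
    return hashlib.sha256(signature).hexdigest()


registry = set()  # stands in for the shared, independent industry registry


def register(frames):
    registry.add(clip_fingerprint(frames))


def flag_assembled_work(clips, threshold=0.8):
    """Flag a long work if most of its constituent clips are registered."""
    hits = sum(clip_fingerprint(c) in registry for c in clips)
    return hits / len(clips) >= threshold


clip_a = [[120, 130, 125], [122, 128, 126]]   # toy "frames" of pixel values
clip_b = [[40, 45, 42], [41, 44, 43]]
register(clip_a)
register(clip_b)
assert flag_assembled_work([clip_a, clip_b]) is True
```

The point of the sketch is the detection logic, not the hash: even if each clip individually satisfies every guardrail, the registry makes the assembly of hundreds of such clips into a feature detectable after the fact.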
International Enforcement
8. Open-Source Model Guardrails. Open-source AI video generation models should include the same drift guardrails as commercial tools. As the open-source ecosystem grows, it’s important that freely available models don’t become a loophole that undermines the protections applied to commercial services. We call for a framework, developed in collaboration with the open-source AI community, that ensures these protections are applied broadly and fairly without stifling the innovation that benefits everyone.
9. International Coordination. We call for international coordination between the US, EU, UK, and Australia to ensure these guardrails are adopted consistently across borders, preventing companies from bypassing protections by operating from unregulated jurisdictions. When guardrails are adopted by the world’s largest markets in a coordinated way, compliance becomes the default.
We’ve Been Here Before
Twenty years ago, the film industry faced an existential threat: piracy.
Anyone with a camera could walk into a cinema, record a film, and distribute it to millions. Anyone with an internet connection could copy and share entire studio catalogues overnight. The technology to destroy the film industry existed, and it was freely available to everyone.
But piracy didn’t kill cinema. The industry fought back with smart, enforceable rules that made large-scale piracy impractical, risky, and commercially unviable. Not by banning the technology. Not by restricting the internet. But by building targeted regulations around the platforms and tools that enabled it.
We are at exactly the same moment again.
AI video generation technology is advancing at a pace that most people outside the industry don’t fully appreciate. Two years ago, the best AI could produce was a barely coherent clip of Will Smith eating spaghetti. Today, it can produce photorealistic trailers indistinguishable from $100 million studio productions. Tomorrow, it will be able to generate a complete feature-length film or an entire season of television from a single text prompt. And that tomorrow is closer than most people realise.
Imagine a world where anyone can type “generate a thriller like Sicario set in the Australian outback” and receive a fully rendered two-hour film within minutes. A world where streaming platforms can fill their catalogues with AI-generated series without commissioning a single human production. A world where the algorithm doesn’t just recommend what you watch, it creates what you watch, tailored to your viewing history, your preferences, frame by frame.
That world is not science fiction. It is the logical, inevitable endpoint of the technology that exists today if we do nothing.
Who This Targets
This petition does not target film studios, streaming platforms, TV networks, individual filmmakers, artists, or audiences.
It targets approximately 10 to 15 AI video generation companies, including OpenAI (Sora), Google (Veo), Runway, Kling, Pika, Luma, MiniMax, Stability AI, and ByteDance (Jimeng). These companies already operate within defined boundaries and content restrictions. We are simply asking for one additional boundary: protections for the film and television industry.
This doesn't stifle innovation. These companies can continue improving every aspect of their technology. We are only asking that one specific capability be constrained: the ability to produce the seamless, consistent output required to replace human-made films and series.
Why Now
The momentum is already building. We are not starting from scratch. We are joining a wave.
In April 2026, the 79th Festival de Cannes ruled that generative AI is ineligible for its official competition, declaring that “a film is not an assembly of data; it is a personal vision.” Over 400 Hollywood creatives have written to the White House opposing AI copyright exemptions.
SAG-AFTRA is negotiating its strongest AI protections yet ahead of the 2026 TV/Theatrical contract. The Creators Coalition on AI, founded by filmmakers including Joseph Gordon-Levitt, Natasha Lyonne, and Daniel Kwan, is building industry-wide standards for ethical AI use.
Legislation is moving. The NO FAKES Act would protect performers’ likenesses from unauthorised AI replication. The TRAIN Act would create transparency around training data. The CLEAR Act would require disclosure of copyrighted works used in AI training. The EU AI Act is mandating content watermarking by August 2026.
But none of these address the fundamental threat: a future where AI tools can generate original films and series with original characters that violate nobody’s copyright and use nobody’s likeness, and still replace human filmmakers entirely.
This petition fills that gap.
What This Means
This is not abstract. The global film, television, and entertainment industry employs millions of people. In the United States alone, the motion picture and television industry supports over 2.5 million jobs.
In Europe, the audiovisual sector employs more than 1.2 million people. Across the UK, Australia, South Korea, India, Nigeria, and every country where stories are told on screen, millions more depend on this industry for their livelihoods. Economists estimate that over 100,000 US jobs in film, television, and animation could be disrupted by generative AI by the end of this decade. And that is just one country.
This touches everyone in the industry. Studios, streamers, networks, independent producers, and every crew member, artist, and technician who brings productions to life. From the largest franchise to the smallest independent short. From the writers’ room to the sound stage to the edit suite. Every person in this industry has a stake in what happens next.
We are not asking to freeze the world in place. We are asking to protect the people who make the stories that move us, entertain us, challenge us, and define our culture. We are asking for the same common-sense protections that saved this industry from piracy, applied thoughtfully to the next great challenge it faces.
The technology to replace human filmmaking exists. Just like the technology to pirate films existed twenty years ago. And just like then, the answer is not to pretend it will go away. It is to build smart rules around it.
Sign this petition. Protect the film industry. Because humans make films.
Protect the Film Industry Campaign
petition@protectthefilmindustry.com | protectthefilmindustry.com
Petition created on 14 May 2026

