Adobe Used Creator Work to Train AI Without Consent — Creators Deserve Better


My Story
I started Diversity Photos in 2016 because I was tired of seeing people like me misrepresented - or not represented at all - across stock photography and media platforms.
I come from a technical background. I have a degree in Electrical Engineering, an MBA focused on entrepreneurship, and I spent years as a cybersecurity engineer protecting critical infrastructure. Photography became my creative outlet, but it also became something bigger: a mission.
Over the years, I’ve worked with more than 5,000 people of color and created over 100,000 images and videos. Real people. Everyday families. Veterans. Professionals. Parents. Children. All captured with dignity, intention, and consent so that media and technology could reflect the world as it actually is.
Diversity Photos was never just a content library. It was about positive representation and economic mobility for communities that have historically been excluded, exploited, or erased.
Why I Chose to Work With Adobe
In 2018, Adobe recruited us because of our unique, diverse catalog. They talked about collaboration, shared values, and being part of a small, premium group whose content would be highlighted and promoted. It felt like a partnership rooted in inclusion and mutual benefit.
The agreement was take-it-or-leave-it. I had no ability to negotiate the terms. Adobe explained that the license was necessary so they could host, market, and sell our images to their customers. That made sense. In 2018, generative AI wasn’t commercially viable. No one was talking about AI models replacing creative work.
If I had known that this same agreement would later be used to justify training AI systems that directly compete with my business, I never would have signed it. Never.
What I’m Asking For
I’m asking for something simple and necessary for all creatives:
- Transparency about when and how creative work is used to train AI
- Meaningful consent, separate from basic platform participation
- Fair compensation when creative work is used to create lasting AI value, based on current data-licensing market rates
- Updated legal standards that reflect the realities of modern technology
Here’s why.
What Changed
Around 2020, I began working with AI myself - using models to label images and improve workflows. That’s when I saw something disturbing.
AI systems labeled beautiful Black women as "ugly." They labeled Black men as "gorillas." They labeled groups of Black people with deeply racist terms.
At the same time, I learned about medical AI systems that couldn’t reliably detect skin cancer on darker skin tones. The problem wasn’t the models - it was the data.
Our content mattered not just for representation, but for responsible AI. We began offering licensed evaluation datasets - carefully controlled, consent-based datasets used to evaluate AI models for bias and performance, not for training. In machine learning, once data is used for training, it permanently loses its value as an independent evaluation dataset. Because of this, our content was never meant to be used for training under any circumstances.
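To make the evaluation-data point concrete, here is a minimal sketch, assuming synthetic data and the scikit-learn library (both purely illustrative - this is not our catalog or anyone’s production pipeline). A model scored on examples it was trained on looks near-perfect; only a set genuinely held out of training measures real performance, and once that set leaks into training, the measurement is lost for good.

```python
# Minimal sketch of why training contaminates an evaluation set.
# Synthetic data and scikit-learn are illustrative assumptions here.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))                        # stand-in features
y = (X[:, 0] + rng.normal(size=1000) > 0).astype(int)  # noisy labels

X_train, X_eval, y_train, y_eval = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Contaminated "evaluation": scoring on data the model trained on
# looks near-perfect and says nothing about real performance.
print("accuracy on training data:", model.score(X_train, y_train))

# Honest evaluation: a truly held-out set reveals the real number.
# Once this set leaks into training, that measurement is gone.
print("accuracy on held-out data:", model.score(X_eval, y_eval))
```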
Discovering the Use of Our Images
In 2023, Adobe released Firefly, its AI image generation model, positioning itself as the "ethical AI company" whose models were trained responsibly.
I asked: Were our images used?
Beginning in June 2023, I repeatedly contacted my Adobe partners asking to exclude our content from AI training and to establish a proper data-licensing arrangement. I genuinely believed this would be resolved collaboratively. Instead, I was strung along for months.
Eventually, Adobe admitted - multiple times - that our images had been used to train Firefly. They sent a "bonus" payment of about $1,100 for nearly 12,000 images - less than ten cents per image.
I refused the payment. I did not consent to the use, and it was not fair or equitable.
Adobe later suggested $5,000, then $30,000 - figures based on the idea that the training run only took a few months. But AI training doesn’t work that way. Once a model is trained, the data’s influence is baked into its weights permanently; it can’t be subtracted out later. Based on comparable data-licensing rates in the industry, a full-buyout perpetual license for a dataset of this nature could easily reach $2,000,000.
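For scale, here is a quick back-of-the-envelope comparison of what each figure works out to per image. The dollar amounts and the roughly 12,000-image count are the ones cited above; the script itself is illustrative arithmetic, nothing more.

```python
# Per-image value implied by each figure cited in this petition.
# Amounts and the ~12,000 image count come from the text above;
# this comparison is illustrative arithmetic, nothing more.
images = 12_000
figures = [
    ('"bonus" payment', 1_100),
    ("second offer", 5_000),
    ("third offer", 30_000),
    ("estimated perpetual-license value", 2_000_000),
]
for label, total in figures:
    print(f"{label}: ${total:,} total = ${total / images:,.2f} per image")
```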
When I Tried to Seek Accountability
When I formally challenged Adobe’s use of our content, I hit a wall of forced arbitration. Before any discovery could take place - before a single document was exchanged - Adobe moved to dismiss nearly all claims. Arbitration costs reached $24,000 just to proceed, with total estimates exceeding $100,000. The contractual clause that was supposed to protect creators like me if arbitration became unaffordable instead became the mechanism that effectively silenced me.
Even more alarming: our images had been made publicly available, unwatermarked and unprotected, through an Adobe-owned domain - allowing third-party AI companies to download and use them freely.
Why This Matters Beyond My Story
This isn’t about being anti-AI. I believe in AI. I work with Responsible AI teams. I want technology to work for everyone.
But consent matters. Context matters. Fairness matters.
What’s happening right now goes far beyond scraping public data. Platforms are using Terms of Service written in a pre-AI era to justify AI training on user-submitted content - without renewed consent, without meaningful choice, and without fair compensation. Arbitration clauses then make those decisions nearly impossible to challenge.
Creators carry the risk. Platforms capture the upside. And precedent gets set quietly.
Why Your Signature Matters
If you’ve ever uploaded your work believing it would be used one way - only to later learn it was used another - this affects you too.
We are just in time to fix this - before outdated contracts and quiet precedent decide the future of creative labor forever.
Please sign and share.
