You are being Recorded without Consent. You don’t get a Choice. This needs to Change.

The Issue
There was a time when being out in public meant exactly that: being in public. You might be seen, perhaps noticed, occasionally remembered. Now it means something else entirely. It means you are being recorded. Not occasionally. Not deliberately. Constantly.
Your face is captured, uploaded, processed, and more often than not, turned into content by platforms, AI systems, and complete strangers.
And the most remarkable part? No one asks!
No one pauses to say, “Do you mind?”
No one considers whether you might prefer not to be filmed, analysed, or shared.
You are visible, and apparently that is now enough. Except it isn't. Visibility has never meant consent, and it never will. Yet here we are, living in a world with no simple, universally recognised way to say: Do Not Record Me.
So Instead of Complaining, We Built Something
Because waiting for large technology companies to voluntarily give people more control over their data is a bit like waiting for a fox to install better locks on a henhouse. Unlikely.
So the Do Not Record Me (DNRM) movement decided to do something far more practical: build a system that makes non-consent visible, detectable, and enforceable.
It starts with something simple. Clothing.
Not fashion for the sake of fashion, but a clear, unmistakable signal. Do Not Record Me is a message people can understand instantly in the real world and, more importantly, one that technology can recognise.
But this isn't just about asking nicely. We built an open-source computer vision system that detects the signal and blurs the wearer's face in real time. Not in theory. Not in a research paper. In practice.
It works, and you can see it here:
👉 View the open-source code: https://github.com/donotrecordme/DNRM_Privacy
👉 Watch the system in action: https://youtu.be/xa3heaouppk
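The pipeline described above (spot the DNRM signal, then blur the wearer's face before the frame is shown) can be sketched in miniature. This is a toy, standard-library-only illustration, not the actual DNRM_Privacy code: the real system presumably uses a trained detector and a proper imaging library, and every function name here is hypothetical.

```python
# Toy sketch of the DNRM pipeline: detect the signal, blur the region.
# A frame is a 2D list of greyscale pixel values (0-255).

def detect_dnrm_regions(frame):
    """Hypothetical detector: flags any pixel with value 255 as the
    'signal' and returns a small bounding box around each hit.
    A real detector would recognise the clothing pattern instead."""
    regions = []
    for y, row in enumerate(frame):
        for x, v in enumerate(row):
            if v == 255:
                regions.append((max(0, y - 1), max(0, x - 1), y + 1, x + 1))
    return regions

def box_blur(frame, region):
    """Replace each pixel inside the region with the mean of its 3x3
    neighbourhood, leaving the rest of the frame untouched."""
    y0, x0, y1, x1 = region
    h, w = len(frame), len(frame[0])
    out = [row[:] for row in frame]
    for y in range(y0, min(y1 + 1, h)):
        for x in range(x0, min(x1 + 1, w)):
            nbrs = [frame[yy][xx]
                    for yy in range(max(0, y - 1), min(h, y + 2))
                    for xx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(nbrs) // len(nbrs)
    return out

frame = [[10, 10, 10],
         [10, 255, 10],
         [10, 10, 10]]
for region in detect_dnrm_regions(frame):
    frame = box_blur(frame, region)
print(frame)  # the 255 'signal' pixel is averaged away
```

The point of the sketch is the ordering: detection and blurring happen on the frame itself, before anything is displayed or uploaded, which is what makes the protection real-time rather than an after-the-fact edit.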
And Before You Ask - No, This Doesn’t Protect Criminals
This is usually the point where someone raises a hand and says, “Ah, but what if someone commits a crime while wearing one of these?”
A fair question. And one we’ve already answered.
When a DNRM signal is detected, the system is designed to encrypt the original image at the moment it is captured, while the blurred version is what people see. Crucially, the original footage is not stored by DNRM; it remains exactly where it was recorded.
What changes is access.
We are building a blockchain protocol where no single person, company, platform, or government can simply decide to unblur a face. Instead, access to the original, unredacted image would require a valid legal warrant, independent multi-party approval, and transparent, auditable authorisation.
In other words, it becomes possible to recover evidence in serious cases but impossible to quietly harvest, scrape, or misuse people's faces at scale. Mass surveillance becomes difficult; targeted, lawful investigation remains possible. Which is precisely how it should be.
👉 You can explore the protocol here: https://github.com/donotrecordme/DNRM_Protocol
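The access model described above, where no single party can unblur a face and several independent approvers must act together, is essentially threshold cryptography. Here is a minimal sketch using Shamir secret sharing: the key that encrypted the original frame is split so that, say, any 3 of 5 independent parties must combine their shares to reconstruct it. This is an illustration of the concept under our own assumptions, not the actual DNRM_Protocol design, and all names are hypothetical.

```python
# Sketch: split a frame-encryption key so that k of n independent
# parties must cooperate to reconstruct it (Shamir secret sharing
# over a prime field). Illustrative only.
import random

P = 2**127 - 1  # a prime large enough to hold a 16-byte key

def split(secret, n, k):
    """Return n shares; any k of them reconstruct `secret`,
    and fewer than k reveal nothing about it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation of the polynomial at x = 0."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

key = random.randrange(P)       # stand-in for the key that encrypted
shares = split(key, n=5, k=3)   # the original, unblurred frame
assert reconstruct(shares[:3]) == key  # three approvers: key recovered
assert reconstruct(shares[:2]) != key  # two approvers: no recovery
```

This is what "no single person, company, platform, or government can simply decide to unblur a face" means mechanically: decryption requires a quorum, so a warrant process can convene one, while bulk scraping cannot.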
The Point Is Not Radical
Despite how this might sound, the idea itself is remarkably simple. If someone does not want to be recorded, they should have a clear way to say so, and that signal should be respected: not ignored, overridden, or quietly monetised.
Right now, that doesn’t exist. But it could.
What Needs to Happen Next
For this to work at scale, the technology industry needs to do what it does best: adopt a standard.
Platforms should recognise visual privacy signals and apply protection automatically. Device manufacturers should support real-time anonymisation at the point of capture. Not as an optional feature buried in settings, but as a recognised, consistent behaviour.
Because this isn't about restricting technology. It's about restoring a very basic idea: that people should have a say in how their image is used.
So Here’s the Ask
If you believe that being visible should not automatically make you content… If you believe that consent should mean something… Then support the recognition of visual privacy signals as a standard.
Sign the petition and help make visual consent the default, not the exception.
No Consent. No Recording.
Important: Change.org may prompt for a donation after signing. This is optional and supports their platform, not this campaign.

Petition created on 22 March 2026
