Redesign AI algorithms with cognitive brakes


The Issue
Autonomy isn’t protected by avoidance, but by the ability to enter and exit power without losing oneself.
In a world increasingly dominated by artificial intelligence and algorithm-driven interactions, I worry about the subtle yet profound influence these technologies exert on our daily decisions, thoughts, and even our very sense of self. As someone who has invested time and effort in developing my inner self-authorship, I am acutely aware of the fragility of this sovereignty. Many, especially children and those without the resources to cultivate this inner strength, are at risk.
The risk isn’t loud. It is quiet, gradual, and easy to miss, which is exactly why it matters. Algorithms are often designed to optimize for engagement, learning user behaviors over time and adjusting their strategies accordingly. While this can be beneficial, it can also amount to a subtle form of manipulation that gradually steers individuals away from their own values, beliefs, and critical thinking. The call to redesign AI algorithms with cognitive brakes is not an overreaction; it is a necessary precaution.
Cognitive brakes are not abstract ideals. They are practical design choices — such as systems that ask users to reflect before providing answers, that present multiple perspectives instead of a single authoritative response, that slow interactions in learning or emotionally sensitive contexts, or that require human input before conclusions are drawn. These deliberate pauses restore effort, uncertainty, and reflection — the very conditions under which independent thinking develops.
Cognitive brakes would function as deliberate pauses in automated decision-making processes, allowing for user interventions, greater transparency, and opportunities for reflection and conscious choice. It is crucial that developers and policymakers collaborate to integrate mechanisms that ensure AI respects and enhances human agency, rather than undermining it.
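To make the idea concrete, here is a minimal sketch of what a cognitive brake could look like in code. All names here (`CognitiveBrake`, `answer_fn`) are illustrative assumptions, not a real API: the wrapper asks the user to state their own view first, inserts a deliberate pause, and returns multiple perspectives rather than a single authoritative answer.

```python
import time


class CognitiveBrake:
    """Illustrative wrapper that reframes automated answers to leave
    room for reflection. Not a real library; a sketch of the pattern."""

    def __init__(self, pause_seconds=2.0, perspectives_required=2):
        # How long to slow the interaction, and how many distinct
        # viewpoints to surface instead of one definitive reply.
        self.pause_seconds = pause_seconds
        self.perspectives_required = perspectives_required

    def apply(self, question, answer_fn):
        # 1. Invite the user to articulate their own view first.
        prompt = (f"Before seeing an answer, what is your own view on: "
                  f"{question!r}?")
        # 2. Deliberate pause: effort and waiting are part of the design.
        time.sleep(self.pause_seconds)
        # 3. Present several perspectives, not a single answer.
        #    `answer_fn` is any callable producing n candidate views.
        perspectives = answer_fn(question, n=self.perspectives_required)
        return {"reflection_prompt": prompt, "perspectives": perspectives}
```

Usage would look like `CognitiveBrake().apply("Is this claim true?", my_model)`, where the calling interface shows the reflection prompt before revealing the perspectives. The design choice is that the brake sits between the model and the user, so it can be mandated or audited independently of any particular AI system.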
Governments, educational institutions, and technology companies must prioritize the development of these cognitive safeguards. By doing so, we can mitigate the risk of AI systems silently eroding our psychological autonomy, ensuring they serve as supportive tools rather than manipulative forces.
Join me in calling for a redesign of AI algorithms with cognitive brakes, safeguarding our ability to think, choose, and act independently. Sign this petition and help protect our collective human sovereignty.

The Decision Makers
Petition created on 25 December 2025