Force social media companies to let users turn off algorithm-generated feeds


The Issue
The Algorithmic Transparency and Choice Act
Section 1: Purpose
This legislation seeks to mitigate the societal and political polarization caused by algorithmically generated content feeds on social media platforms. By providing users with the option to disable algorithmic recommendations, this Act promotes greater transparency, user autonomy, and a healthier digital discourse.
Section 2: Mandatory User Control Over Content Feeds
(a) Social media companies operating in the United States, including but not limited to Google (YouTube), Meta (Facebook, Instagram), and X (formerly Twitter), must provide users with a clear, accessible option to disable algorithmically recommended content feeds.
(b) When this option is selected, users shall receive content in chronological order from accounts they follow, or ranked by neutral, non-personalized methods.
(c) Platforms must not employ deceptive design practices (i.e., dark patterns) that obscure or discourage users from selecting non-algorithmic feeds.
Section 3: Transparency and Accountability
(a) Platforms must disclose how their algorithms curate content, including the criteria used for recommendations.
(b) An independent oversight body will be established to ensure compliance and investigate complaints of algorithmic manipulation leading to undue political polarization.
Section 4: Enforcement and Penalties
(a) Non-compliance will result in financial penalties, escalating for repeat violations.
(b) Users must have a means to report non-compliance, and platforms must respond to such reports within a reasonable timeframe.
Section 5: Effective Date
This Act shall take effect one year after enactment, allowing platforms time to implement necessary technical adjustments.
By restoring user choice and reducing forced algorithmic exposure, this Act safeguards democratic discourse and societal cohesion.
Petition created on March 8, 2025
