Call for Algorithm Change in Instagram Feed to Prevent Harmful Content Loops

The Issue

To: Instagram, Ministry of Electronics and Information Technology, and Ministry of Health and Family Welfare, Government of India

Subject: To Revise Instagram's Content Recommendation Algorithm and Content Policies to Protect Mental Health

Dear Instagram and Esteemed Authorities,

Recently, while scrolling through Instagram Reels, I encountered content that, intentionally or unintentionally, could be highly triggering for individuals struggling with suicidal thoughts and other mental health issues. Even after I voiced my concerns by commenting on such posts, the pattern continued: every few reels, I was shown content with heavy messages, disturbing graphics, or melancholic songs that could emotionally overwhelm vulnerable users.

This issue is not about a single instance; it points to a broader, systemic problem with Instagram's content algorithms and policies. While Instagram has policies against explicitly suicidal or violent content, material that falls into a grey area (posts that do not violate the policies graphically but subtly promote despair or hopelessness) still makes its way into users' feeds. Such exposure can be extremely detrimental, particularly for those with fragile mental health.

The Issues:

  1. Instagram's algorithm may inadvertently create a feedback loop of potentially harmful content for vulnerable users.
  2. When users interact with content related to emotional distress or mental health struggles, the algorithm appears to show more similar content, potentially exposing vulnerable individuals to a higher concentration of triggering material.
  3. This algorithmic behaviour can lead to a significant increase in potentially harmful content in the Reels feed of users who may already be struggling with mental health issues.
  4. The content in question often contains subtle messaging or emotional themes that, while not graphically violent or explicitly against current policies, can still be deeply affecting for users with mental health vulnerabilities.
  5. Existing content policies and algorithmic safeguards do not adequately address this "grey area" content that can significantly impact users' mental well-being.
  6. Users may find it challenging to break out of this cycle of content once the algorithm begins to favour these themes in their feed.
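
To illustrate the feedback loop described above, consider the following minimal, purely hypothetical simulation. It is not Instagram's actual ranking system (which is proprietary); it only sketches the general mechanism of engagement-weighted recommendation: items a user engages with are weighted more heavily on the next refresh, so a single interaction with distressing content can snowball into a feed dominated by it.

```python
import random

random.seed(0)

# Toy catalogue: 20% distressing items, 80% other content.
CATALOGUE = ["distress"] * 20 + ["other"] * 80

def recommend(engagements, k=20):
    """Weight each catalogue item by the user's past engagement with
    its theme -- a crude stand-in for engagement-optimised ranking."""
    weights = [1 + 5 * engagements.count(item) for item in CATALOGUE]
    return random.choices(CATALOGUE, weights=weights, k=k)

engagements = ["distress"]   # one initial interaction
shares = []
for _ in range(6):           # six feed refreshes
    feed = recommend(engagements)
    shares.append(feed.count("distress") / len(feed))
    # The user dwells on the distressing items the feed surfaces,
    # which the ranker reads as further interest in that theme.
    engagements += [item for item in feed if item == "distress"]
```

Even in this toy model, the share of distressing content in the feed climbs refresh after refresh, because every exposure generates engagement signals that further boost the same theme.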

The Impact:

  1. Users with existing mental health challenges may find themselves trapped in a feedback loop of increasingly distressing content, potentially exacerbating their condition.
  2. The algorithm's tendency to show similar content based on user interactions may lead to an echo chamber effect, where vulnerable individuals are disproportionately exposed to potentially triggering material.
  3. Individuals already struggling with mental health issues, particularly those with suicidal thoughts, may experience heightened emotional distress due to repeated exposure to thematically similar content.
  4. The subtle nature of some of this content may bypass conscious defences, affecting users on a subconscious level and potentially influencing their emotional state and behaviour.
  5. Continuous exposure to content depicting extreme emotional states or subtle references to self-harm may inadvertently normalise these experiences, especially for younger or more impressionable users.
  6. This normalisation could potentially lower the threshold for considering self-harm or suicidal actions as viable responses to emotional pain.
  7. Once the algorithm begins favouring emotionally charged or potentially harmful content for a user, it may become challenging for that individual to break free from this cycle of content exposure.
  8. This difficulty in disengaging could lead to prolonged exposure to triggering material, even when the user attempts to seek out more positive content.
  9. For users undergoing treatment or recovery for mental health issues, the constant influx of potentially triggering content may impede their progress and undermine therapeutic efforts.
  10. The algorithmic amplification of content related to their struggles may make it harder for recovering individuals to maintain a healthy mental environment online.
  11. The algorithmic spread of content that subtly references or glorifies harmful behaviours could contribute to a wider cultural normalisation of mental health struggles without proper context or support.
  12. This may indirectly affect even those users who do not directly interact with such content, by shaping societal perceptions and discussions around mental health.

We Call For:

Instagram to:

  1. Conduct a comprehensive review and revision of its content policies, specifically addressing the "grey area" content related to mental health and suicidal ideation that, while not explicitly violent, can be psychologically harmful.
  2. Implement algorithmic changes that reduce the frequency of potentially triggering content in users' feeds, especially for those who have interacted with similar content.
  3. Introduce "circuit breakers" that prevent continuous streams of emotionally charged or potentially harmful content.
  4. Diversify content recommendations to ensure a balanced and healthy content diet for all users.
  5. Develop and deploy an AI-driven content moderation system specifically trained to identify subtle references to self-harm, suicidal ideation, and severe emotional distress in various Indian languages and cultural contexts.
  6. Actively promote mental health resources and support helplines, especially in response to any flagged content that may have slipped through the cracks.
  7. Provide regular public reports on the platform's efforts to address mental health-related content issues, including statistics on content removal, policy updates, and the effectiveness of implemented measures.
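
As a sketch of what the "circuit breaker" in point 3 might look like, the post-ranking filter below caps consecutive runs of sensitive items in a feed. This is a hypothetical illustration, not Instagram's implementation; the `is_sensitive` classifier is assumed to exist (in practice it would be the kind of moderation model point 5 calls for).

```python
def circuit_breaker(feed, is_sensitive, max_run=2):
    """Drop any sensitive item that would extend a consecutive run
    beyond `max_run`, breaking continuous streams of heavy content."""
    out, run = [], 0
    for item in feed:
        if is_sensitive(item):
            if run >= max_run:
                continue     # break the streak: skip this item
            run += 1
        else:
            run = 0          # neutral content resets the counter
        out.append(item)
    return out
```

For example, a ranked feed of four distressing reels in a row would be trimmed to two before other content appears, directly limiting the "continuous streams" this petition describes.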

The Ministry of Electronics and Information Technology to:

  1. Draft and publish updated guidelines for social media platforms operating in India, with specific provisions addressing the algorithmic amplification of potentially harmful content related to mental health.
  2. Establish a joint task force with the Ministry of Health and Family Welfare to oversee the implementation of these guidelines and ensure they align with mental health best practices.
  3. Mandate that all social media platforms operating in India conduct regular algorithmic audits, with results to be submitted to the Ministry, to ensure compliance with the new guidelines.
  4. Implement a penalty system for non-compliance, with fines proportional to the platform's Indian user base and revenues.

The Ministry of Health and Family Welfare to:

  1. Develop comprehensive guidelines, in consultation with mental health experts, for the responsible portrayal of mental health issues on social media platforms.
  2. Launch a nationwide digital literacy campaign focusing on the potential impacts of social media on mental health and providing strategies for healthy online engagement.
  3. Establish a 24/7 mental health helpline specifically for individuals affected by harmful online content.

Both Ministries to jointly:

  1. Establish an Inter-Ministerial Committee to oversee the implementation of these measures and facilitate coordination between government bodies, mental health professionals, and technology companies.
  2. Organise regular public hearings where citizens, mental health advocates, and technology experts can provide input on the effectiveness of implemented measures and suggest improvements.

It is crucial to recognise the impact social media has on mental health and to take proactive steps to safeguard vulnerable individuals from exposure to harmful content. By signing this petition, you are supporting a safer, more responsible social media environment that prioritises the mental well-being of its users. We urge Instagram and the relevant government authorities to take swift action on this critical issue.

Saurav Shrivastava, Petition Starter

The Decision Makers

Mark Zuckerberg
Founder and CEO at Facebook
Adam Mosseri
Head of Instagram
Shree Ashwini Vaishnaw
Cabinet Minister, Ministry of Electronics and Information Technology, Govt of India
Shree J.P. Nadda
Cabinet Minister, Ministry of Health and Family Welfare, Govt of India