Mandate Accessible and Bias-Free AI Under India’s Disability Rights Law


The Issue

AI is already deciding who gets education, jobs, healthcare, welfare, and public services.

If it is inaccessible or biased, people with disabilities are excluded by design.

India has over 60 million persons with disabilities. Yet today, artificial intelligence systems used in exams, hiring, healthcare platforms, welfare delivery, and government services routinely fail disabled users — not because of malice, but because accessibility and testing for disability bias are treated as optional.

This is not a future risk. It is happening now.

Why this matters now
In Rajive Raturi v. Union of India, the Supreme Court held that accessibility cannot be left to voluntary guidelines. The Court made it clear that there must be a non-negotiable, enforceable minimum standard, especially for digital systems.

At the same time, government AI policies continue to rely on:

  • voluntary compliance,
  • vague references to “marginalised communities”, and
  • post-harm grievance mechanisms that do not work for automated systems.

This creates a dangerous gap:
AI systems can comply with “ethics” guidelines while still excluding disabled people in practice.

What the evidence shows
The NALSAR Centre for Disability Studies’ report, “Finding Sizes for All”, documents widespread failures of digital accessibility in India, rooted in weak enforcement. AI systems worsen this problem because they are:

  • automated and opaque,
  • deployed at scale, and
  • difficult to challenge after harm occurs.

When AI is inaccessible or biased:

  • disabled students are flagged as cheating,
  • qualified candidates are filtered out of jobs,
  • healthcare and welfare systems misclassify disabled users as “anomalies”.

What we are asking for
We call upon the Supreme Court of India and the Union of India to ensure that:

  • Artificial Intelligence systems are explicitly covered under accessibility rules framed under the Rights of Persons with Disabilities Act, 2016;
  • AI systems used in education, employment, healthcare, welfare, and public services are required to be:
    • accessible by design,
    • tested for disability-related bias,
    • subject to enforceable standards, not voluntary promises;
  • India’s AI governance framework aligns with global best practices, including safeguards found in the European Union’s AI Act, which treats disability inclusion as a legal obligation, not an ethical suggestion;
  • Persons with disabilities and their organisations are meaningfully involved in shaping these rules.

This is not anti-technology
Accessible and fair AI is better AI.
It improves accuracy, trust, and usability for everyone.

Innovation that excludes millions is not progress — it is systemic discrimination by code.

Why your signature matters
Courts listen when it is clear that an issue affects not just one petitioner, but the public at large.
Your signature signals that AI accessibility and bias are civil rights issues, not niche technical concerns.

This petition supports a submission already placed before the Supreme Court in a live matter. Public backing can determine whether this issue is treated as peripheral — or urgent.

Sign to demand that AI in India be accessible, fair, and lawful.

More Information

Nilesh Singit, Petition Starter, Disability Rights Activist
