Public Digital Access Crisis

AI is locking people out. At scale.

This is not a minor bug trend. It is a systemic civil-rights failure that has now spread into software as a whole, through the lightning-fast adoption of AI systems trained on more than 20 years of code that embeds institutional barriers.

Accessibility is a baseline for participation in modern life. When developers automate UI development, they automate exclusion from rights, services, and opportunity.

Illustration of essential public services and accessibility checks

What is happening right now

Scale

Inaccessible patterns spread faster than they can be reviewed and fixed. Accessible training data is scarce, and the reinforcement-learning pipelines AI vendors use to refine their models rarely reward accessible output.

Institutional exposure

High-impact institutions deploy AI-generated code in systems people depend on daily: education, healthcare, finance, and employment.

Accountability gap

As in the years before AI, almost nobody talks about digital accessibility. In the public eye it is a niche concern of disabled people. In reality it is also a matter of quality, equality, and society as a whole.

Who must act

Policymakers

Treat accessibility as enforceable law for AI systems: hold vendors accountable, and penalize noncompliance. If you don't know where to start, contact us! We're happy to help, and we have plenty of experience.

Software Engineers

Refuse to ship discrimination. Block releases on failed accessibility checks, repair the code, and protect users before launch. Automated accessibility checks are straightforward to wire into a CI pipeline (see the sketch below); manual testing with assistive technology should follow.
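
The sketch below shows one way such a release gate could look, assuming a Playwright test setup with the @axe-core/playwright package; the URL is a placeholder for the page under test. The test fails, and thereby blocks the build, whenever the axe engine reports a WCAG 2.0 A or AA violation.

  // Minimal sketch of a CI accessibility gate. Assumes @playwright/test and
  // @axe-core/playwright are installed; https://example.org/ is a placeholder.
  import { test, expect } from '@playwright/test';
  import AxeBuilder from '@axe-core/playwright';

  test('release gate: no WCAG A/AA violations', async ({ page }) => {
    await page.goto('https://example.org/');
    const results = await new AxeBuilder({ page })
      .withTags(['wcag2a', 'wcag2aa']) // restrict the scan to WCAG 2.0 A/AA rules
      .analyze();
    // Any reported violation fails the test and therefore blocks the release.
    expect(results.violations).toEqual([]);
  });

Keep in mind that automated engines catch only a fraction of real-world barriers, which is why manual testing with screen readers and keyboard-only navigation remains essential.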

Disabled users

Your experience is not edge-case feedback. It is frontline evidence. Report barriers, demand public timelines for repair, and hold your institutions accountable.

AI companies

Stop externalizing harm. Raise the default accessibility of generated code by a wide margin, and accept independent scrutiny. Work with disabled users on this!