Taeke Reijenga has extensive experience with the business side of web accessibility. As CEO of the full-service digital agency Level Level, he quickly got his entire team on board with including web accessibility in their workflow.
Artificial Intelligence (AI) is changing how people interact with technology, including tools that make websites and apps more accessible. From automatically adding descriptions to images to converting speech into text in real time, AI offers some exciting possibilities for making the internet work better for everyone.
But let’s be realistic – AI isn’t a magic solution for accessibility. While it can help with tasks like generating captions or reading text aloud, it sometimes makes mistakes or misses important context that human experts would catch. There are also valid concerns about privacy and bias in AI systems.
That’s why we need to look carefully at both the benefits and limitations of AI in accessibility. What can these tools actually do well? Where do they fall short? And most importantly, how can we make sure AI helps rather than hinders people with disabilities?
To answer these questions and more, we’ll explore some real AI accessibility tools, examine their practical uses, and look at the challenges they present. We’ll help you understand where AI fits into your accessibility toolkit – and where human expertise remains invaluable.
AI in web accessibility: current landscape and ethical challenges
AI tools are becoming more common in web accessibility, with some interesting applications already in use. Let’s look at what’s happening and what we need to watch out for.
Many websites now use AI to add alternative text to images automatically. While this can help with basic image descriptions, the results aren’t always reliable – AI might describe a “smiling person” but miss that they’re giving an important presentation or demonstrating a product.
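To make that concrete, here's a minimal sketch of how a site might wire AI-generated alt text into its workflow while keeping a human in the loop. The `generateImageDescription` call is a hypothetical placeholder for whichever image-captioning service you use, not a real API – the point is simply that the AI's output is stored as a draft for an editor to review, not published as-is.

```typescript
// Hypothetical types – adjust to whatever captioning service you actually use.
interface AltTextDraft {
  imageUrl: string;
  aiSuggestion: string;   // what the model produced
  approved: boolean;      // has a human reviewed and accepted it?
  finalAltText?: string;  // the text that actually ships
}

// Placeholder for a call to an image-captioning model (not a real API).
declare function generateImageDescription(imageUrl: string): Promise<string>;

async function draftAltText(imageUrl: string): Promise<AltTextDraft> {
  const aiSuggestion = await generateImageDescription(imageUrl);

  // The AI suggestion is only a starting point: an editor still needs to add
  // the context the model can't see – why the image is on the page at all.
  return { imageUrl, aiSuggestion, approved: false };
}
```

The useful part is the `approved` flag: it forces a review step, which is exactly where the missing context (the presentation, the product demo) gets added back in.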
There have also been significant developments in voice assistants and real-time captioning:
- Microsoft’s Seeing AI helps people who are blind or have low vision by reading text, identifying objects, and describing scenes.
- The Be My Eyes app connects users with both AI assistance and human volunteers to help with visual tasks.
- There's also the Hand Talk app, which automatically translates English into American Sign Language (ASL), and XanderGlasses, which caption in-person conversations and display those captions to the wearer.
But these AI tools raise important questions:
Since AI systems learn from data, isn’t there a chance of bias if that data doesn’t include diverse voices and experiences? After all, we don’t want the tools to work better for some people than others. For example, speech recognition often struggles with accents or speech differences.
Privacy is another concern. AI needs lots of data to work well, but this can include sensitive information about people’s disabilities or needs. How do we balance better accessibility with protecting personal information?
We also need to think about who’s building these tools. Microsoft’s Responsible AI guidelines emphasise including people with disabilities in development and testing – because who better to say if an accessibility tool actually helps? However, that’s not always the case, so you need to be very careful when choosing an AI accessibility tool – was it built to make life better for people with disabilities or solely for profit?
Innovative AI solutions enhancing digital accessibility
Next-generation computer vision
Computer vision AI has come a long way from simple object detection. Modern AI systems can now understand complex scenes and provide detailed descriptions of what’s happening in images or the real world around us.
Take Google’s Lookout app, for instance. It doesn’t just tell users “there’s a chair” – it can describe where objects are in relation to each other, read text from signs or packages, and even help with navigation.
Microsoft’s Seeing AI goes further by recognising faces and describing their emotions, helping users understand social cues.
But these tools aren’t perfect. They sometimes miss important details or misinterpret scenes. They also need users to share their camera feed, which raises privacy questions. Plus, many of these tools require steady internet connections and newer smartphones, which not everyone has access to.
Advanced speech recognition and natural language processing
Speech recognition AI is getting better at understanding different ways people speak. This is particularly helpful for people who might find typing difficult or impossible.
Google’s Project Relate shows how AI can adapt to different speech patterns. It helps people with speech impairments communicate more effectively with both devices and other people. The AI learns to understand unique speech patterns and can either clarify speech for listeners or convert it to text.
However, these systems still struggle with background noise, miss non-verbal communication like facial expressions or gestures, and have trouble with multiple people speaking at once.
AI-driven personalisation and adaptive interfaces
AI can now adjust how websites and apps work based on how people use them. These systems aim to make interfaces easier to use by learning from user behaviour.
Some tools try to adjust website settings for better accessibility automatically through overlays. In practice, however, these overlays often make a website even less accessible than before, because they interfere with users’ existing assistive technology setups and preferences. For example, an overlay might try to adjust text size or change the page contrast, but if a user already has software that takes care of that, the overlay can break their existing settings.
This highlights an important point: automated solutions shouldn’t override user preferences or assistive technology settings that people rely on.
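As a rough illustration of that principle, a script can check the preferences users have already expressed through their operating system or browser before changing anything, and back off when a preference is set. This sketch uses the standard `prefers-reduced-motion` and `prefers-contrast` media queries; `applySiteDefaultTheme` is a made-up placeholder for whatever styling your site would otherwise apply.

```typescript
// Hypothetical hook for whatever default styling the site would normally apply.
declare function applySiteDefaultTheme(): void;

function respectUserPreferences(): void {
  // Standard media queries exposing preferences set at the OS or browser level.
  const prefersReducedMotion = window.matchMedia('(prefers-reduced-motion: reduce)');
  const prefersMoreContrast = window.matchMedia('(prefers-contrast: more)');

  if (prefersReducedMotion.matches) {
    // Don't add animated "enhancements" the user has already opted out of.
    document.documentElement.classList.add('no-motion');
  }

  if (prefersMoreContrast.matches) {
    // Let the user's own contrast setting win instead of injecting a theme over it.
    document.documentElement.classList.add('user-contrast');
    return;
  }

  applySiteDefaultTheme();
}
```

The design choice here is deliberately conservative: the script only ever defers to what the user has already configured, rather than guessing at what they need.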
Another feature these tools offer is automated repair – applying alternative text, fixing field labels, addressing keyboard access issues, and similar problems. However, these repairs aren’t recommended: the automated changes are unreliable and haven’t been shown to be accurate or genuinely useful.
AI for cognitive accessibility
AI tools can help make content easier to understand and process for people with cognitive disabilities, among others. BeeLine Reader uses AI to add colour gradients to text, making it easier for some people to read and maintain focus. This can be particularly helpful for people with dyslexia or ADHD.
AI language models like Claude can simplify complex text or break down long documents into more manageable chunks. This helps people who might find dense text overwhelming or difficult to process.
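Here's a minimal sketch of what that kind of text simplification might look like in practice, using Anthropic's TypeScript SDK. Treat the model name and the prompt wording as assumptions to adapt, and note the final comment: the simplified output still needs a human check.

```typescript
import Anthropic from '@anthropic-ai/sdk';

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

// Ask the model for a plain-language rewrite of a dense passage.
async function simplifyText(original: string): Promise<string> {
  const response = await client.messages.create({
    model: 'claude-3-5-sonnet-latest', // assumed model name – use whatever is current
    max_tokens: 1024,
    messages: [
      {
        role: 'user',
        content: `Rewrite the following text in plain language, keeping all key facts:\n\n${original}`,
      },
    ],
  });

  // Take the text from the first content block of the reply.
  const firstBlock = response.content[0];
  const simplified = firstBlock.type === 'text' ? firstBlock.text : '';

  // The output is a draft: a person should still check that nothing important was lost.
  return simplified;
}
```
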
But these tools have limitations, too. BeeLine Reader’s colour-based approach might not work for people who are colourblind, and AI summarisation tools sometimes oversimplify or miss important details. Plus, what’s “simpler” for one person might be harder to understand for someone else.
Privacy, bias, and accuracy in AI accessibility tools
AI accessibility tools can help make the web more accessible, but it’s important to understand their limitations and challenges:
- Privacy concerns affect many AI accessibility features. For example, when an AI tool helps describe images or read text aloud, it might process personal information that users would prefer to keep private. Similarly, someone using AI speech recognition in a workplace might unintentionally share sensitive conversations with the AI system.
- Bias in AI systems can create unfair experiences. Speech recognition might struggle with regional accents, speech differences, or a combination of several factors, making it less useful for some users. Image description AI might work better for some cultural contexts than others but miss important details that matter to different communities.
- Accuracy is another challenge, especially for real-time tools. Live captioning AI can make mistakes that change the meaning of conversations, and it often misses details or context that a human captioner would include. For people relying on these tools for important information, these errors can be more than just inconvenient – they can lead to misunderstandings or missed information.
A balanced approach often works best: use AI tools where they can help but combine them with human expertise.
Ethical AI development: ensuring inclusivity and transparency
Whatever your product is, people with disabilities should be involved throughout AI development – not just as testers at the end. Their experiences and feedback help create better tools that actually solve real problems. This means including people with different disabilities, backgrounds, and ways of using technology.
The data used to train AI matters, too. If AI systems only learn from limited datasets, they won’t work well for everyone. For example, speech recognition AI needs to learn from people with different accents, speech patterns, and languages to be truly useful.
“We need to be clear about what AI can and can’t do. Companies developing AI accessibility tools should explain how their systems work and their limitations. This helps users make informed choices about which tools to use and when to seek alternatives. However, we need to remember that AI cannot replace existing accessibility practices because it learns from a flawed source – 96% of the web is inaccessible. So, good accessibility still needs human understanding, testing, and support.”
– Caitlin de Rooij, Accessibility Consultant & Developer at Level Level and Product Owner at The A11Y Collective
Empower your accessibility skills with The A11Y Collective
AI tools can help make websites and apps more accessible, but they work best when combined with solid accessibility knowledge and skills. Understanding the fundamentals of accessibility helps you make better choices about when and how to use AI effectively.
That’s where The A11Y Collective comes in. Our courses teach you the accessibility skills that AI can’t replace – like how to structure content meaningfully, write helpful alternative text, and create intuitive navigation patterns. Whether you’re a designer, developer, or content creator, you’ll learn practical techniques that work alongside new technologies.
We offer courses for every skill level, covering everything from accessibility basics to advanced topics like ARIA. You’ll learn from experienced professionals who understand both the technical and human sides of accessibility.
Ready to build your accessibility expertise? Check out our course catalogue to find the right learning path for you. Together, we can create a web that works better for everyone – with or without AI assistance.
Ready to get started?
Our “Web accessibility, the basics” course is the perfect starting point for anybody looking to level up their web accessibility knowledge.