Amber Qualm is an Accessibility Consultant working at Level Level. With over ten years of experience in digital product design, she combines her knowledge of user experience and accessibility to help companies, designers and developers create digital products that are accessible for all users.
Accessibility testing with real users transforms good websites into truly inclusive ones. While automated tools and WCAG guidelines provide a foundation, they can’t capture the actual experience of people using your website with assistive technologies.
Think about it: a website might pass every compliance check yet still frustrate someone using a screen reader to browse your product catalogue or someone relying on keyboard navigation to fill out your contact form.
That’s why testing with users who have disabilities makes such a difference. It helps you:
- Find usability barriers that automated checks miss.
- See firsthand how people use assistive technologies.
- Build features that work well in real situations.
- Make accessibility part of your development from day one.
This guide shares practical methods for gathering meaningful feedback from users with disabilities based on W3C’s framework for user testing. We’ll walk through everything from planning your tests and finding participants to conducting sessions and putting the findings to work. You’ll get specific examples and tested approaches that help create better experiences for everyone.
Advancing your accessibility testing expertise
Accessibility user testing puts real people at the centre of your website development, letting you improve the overall user experience and avoid legal troubles. It’s a practical way to understand how different users interact with your website – giving you direct insights you won’t get from automated checks and manual accessibility testing alone.
Let’s break down why this matters. Your testing approach needs to combine three elements:
- Automated testing tools that scan your code and catch technical issues.
- Manual testing to verify accessibility features.
- User testing with people who have disabilities to understand real usage patterns.
Each method plays its part, but user testing shows you things the others can’t. For example, a screen reader user might point out that your product filtering system works technically but takes far too many steps to use effectively.
You have three main ways to run these audits:
- Work with specialist testing services like UserTesting.com and Fable that connect you to experienced testers.
- Run remote sessions where participants use their own assistive technology setup.
- Conduct in-person lab testing to observe and gather immediate feedback.
While all approaches work well, in-person testing often provides the richest insights – which we’ll explore in detail next.
Step-by-step accessibility testing methodology
Running effective accessibility tests requires careful planning and respectful execution. Let’s walk through a proven approach that helps you gather valuable insights while ensuring everyone involved feels comfortable and valued.
1. Planning effective accessibility user testing
- Set clear, measurable goals that align with WCAG 2.2 success criteria. You might aim to measure specific outcomes, such as whether users can complete your checkout process or navigate your menu system without getting stuck.
- Identify your test groups. Focus on people who use different assistive technologies, such as screen readers like NVDA or JAWS, voice control software like Apple’s Voice Control, or magnification tools. Remember that within each group, users will have varying levels of technology experience.
- Find the right participants. Many organisations work with specialist services like AccessWorks or AbilityNet, which maintain relationships with experienced testers. You can also build connections through disability organisations such as the National Federation of the Blind. Social networks offer another avenue – LinkedIn accessibility groups, Bluesky’s A11yFeed community, and Reddit’s r/accessibility all host engaged accessibility professionals and users.
- Allocate your budget to cover several specific areas. Plan for participant compensation ($50-150/hour), and make sure participants can test under optimal conditions with their own equipment – laptops, mobile devices, alternative keyboards. Don’t forget travel expenses and accommodation – participants might need accessible transport or hotel rooms.
- Ask about specific requirements and accommodations. This might include which versions of assistive technology participants use, their preferred testing environment setup, whether they need any specific accommodations (from breaks to support workers), and their experience level with similar websites or apps.
Taeke Reijenga – Founder & trainer at The A11Y Collective: “Set aside 2-4 weeks for recruitment and run pilot tests to iron out technical issues. Before the main testing begins, schedule short technical check-ins to verify that everyone’s equipment works together smoothly.”
2. Conducting tests and accessible environments setup
Okay, you have everything prepped – but how do you actually conduct your tests? First, divide the participants into groups. And if you think a bigger group is always the best option, think again.
Research by Jakob Nielsen actually shows that testing with five users typically uncovers about 85% of usability problems, and running multiple small tests throughout development gives better results than one big test at the end. Of course, you can always adapt the process to your specific needs, but just keep in mind that you don’t need a large group and long tests to get the results you need.
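Nielsen’s figure comes from a simple problem-discovery model: if each tester uncovers a proportion L of all usability issues (Nielsen uses L ≈ 0.31 as a typical average), then n testers together find 1 − (1 − L)ⁿ of them. A minimal sketch:

```python
def problems_found(n: int, discovery_rate: float = 0.31) -> float:
    """Proportion of usability issues found by n testers, assuming each
    tester independently uncovers `discovery_rate` of all issues
    (Nielsen's model with his typical average L of about 0.31)."""
    return 1 - (1 - discovery_rate) ** n

for n in (1, 3, 5, 10):
    print(f"{n} testers: {problems_found(n):.0%}")
```

With five testers the model lands at roughly 85%, which is why adding more participants to a single round yields quickly diminishing returns – running several small rounds catches more issues than one large one.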
Next, you need a good location. The testing space needs to be accessible – but that goes beyond just the room itself. Think about the entire journey: if a participant arrives by train, is there a clear, obstacle-free route from the station? Will someone using a wheelchair encounter any barriers? Always provide detailed route options and be ready to suggest alternatives.
Once you have a great location, go there beforehand to prepare the testing environment. Here’s what you need to consider:
| Equipment needs | Environmental factors | Support ready |
|---|---|---|
| Backup keyboards. | Good lighting. | Note-taking tools. |
| Different chargers. | Minimal background noise. | Recording equipment. |
| Various mouse options. | Clear space for mobility. | Refreshments. |
| Service animal area. | First aid kit. | |
| Accessible toilet. | | |
Remember that while some requirements are universal (like the ones we just listed), you still need to adjust your testing protocols based on individual needs. Someone with motor impairments might need longer task completion times, while a person using a screen reader might prefer receiving instructions verbally rather than in writing.
Now, when it comes to the tests themselves, we recommend focusing on real tasks rather than isolated features. For example, if you’re running tests for an eCommerce site, “Find a birthday gift under $50 and check out” tells you more than “Test the add to cart button”.
Remember that participants are experts in their own experience. If they suggest a different way to complete a task or use an assistive technology in an unexpected way, that’s valuable information about how your site works in the real world.
Finally, keep in mind that testing works best when fully integrated into your development process instead of being an afterthought. Small, frequent tests throughout development catch issues early when they’re easier to fix.
3. Documenting and analysing feedback
Turning user feedback into meaningful improvements requires a systematic approach to data collection and analysis. Start by tracking specific metrics that align with W3C 5.5 test objectives – like how many attempts users need to complete a task or where navigation breaks down.
Record your findings in multiple ways to capture the full picture, and if a participant finds a recording method distracting, be ready to switch to an alternative:
- Screen recordings show exactly where users get stuck.
- Audio recordings catch verbal feedback and frustrations (with consent, of course).
- Video recordings capture facial expressions and emotions (again, with consent).
- Written notes help track patterns across different test sessions.
Remember to measure both hard numbers (time spent, error rates) and user reactions (confusion points, positive moments).
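The quantitative side of this can be as simple as a small script tallying session data. A minimal Python sketch – the participant labels and values below are made-up example data, not a real study:

```python
# Illustrative session log: one record per participant per task.
# Field names and values are hypothetical, not a standard schema.
sessions = [
    {"participant": "P1", "task": "checkout", "seconds": 340, "errors": 2, "completed": True},
    {"participant": "P2", "task": "checkout", "seconds": 512, "errors": 5, "completed": False},
    {"participant": "P3", "task": "checkout", "seconds": 295, "errors": 1, "completed": True},
]

# Hard numbers: completion rate and average time on task.
completion_rate = sum(s["completed"] for s in sessions) / len(sessions)
avg_time = sum(s["seconds"] for s in sessions) / len(sessions)
print(f"Completion rate: {completion_rate:.0%}, average time: {avg_time:.0f}s")
```

Pair numbers like these with your qualitative notes – a 67% completion rate tells you something is wrong; the recordings and notes tell you what.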
Here’s an effective way to structure your findings:
| Priority level | What to include | Example |
|---|---|---|
| High impact | Issues blocking task completion. | Screen reader user can’t access menu. |
| Medium impact | Problems causing frustration. | Form error messages unclear. |
| Low impact | Minor inconveniences. | Alt text starting with “image of”. |
Link each finding back to specific WCAG criteria and note which assistive technologies were affected. For instance, a keyboard trap in your navigation menu violates WCAG 2.1.2 and affects all keyboard users, regardless of their specific assistive technology.
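Findings like these are easier to track and prioritise as structured data. A minimal sketch in Python – the field names, WCAG mappings, and example entries are illustrative, not a standard schema:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    priority: str            # "high", "medium" or "low" impact
    description: str
    wcag_criterion: str      # the success criterion the finding maps to
    affected_at: list[str]   # assistive technologies affected

findings = [
    Finding("medium", "Form error messages unclear",
            "3.3.1 Error Identification", ["all users"]),
    Finding("high", "Screen reader user can't access menu",
            "4.1.2 Name, Role, Value", ["screen readers"]),
    Finding("low", "Alt text starting with 'image of'",
            "1.1.1 Non-text Content", ["screen readers"]),
]

# Sort so high-impact issues top the action plan.
order = {"high": 0, "medium": 1, "low": 2}
for f in sorted(findings, key=lambda f: order[f.priority]):
    print(f"[{f.priority.upper()}] {f.description} (WCAG {f.wcag_criterion})")
```

Keeping the WCAG criterion and affected technologies on each record makes it straightforward to hand developers a prioritised, traceable list.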
One particularly useful approach is implementing a “testing diary” system. Ask participants to note down thoughts about your site as they use it in their daily lives after the formal testing. This often reveals issues that don’t surface during structured testing – like how your site performs with different magnification settings or various lighting conditions.
Finally, create a clear action plan based on your findings. Rather than fixing issues based purely on technical ease, prioritise changes that will most improve the user experience. Share specific examples of problems, along with video recordings of participants’ reactions, with your development team – it helps them understand the real-world impact of their work and why the changes matter.
Best practices for accessibility user testing
Creating consistent testing conditions helps produce reliable results, but don’t let standardisation get in the way of real-world usage. Ideally, the participants will have standardised device configurations, but if they don’t, that’s also fine. Just make sure you document the specific software versions while keeping backup equipment ready for when things don’t go as planned.
The way you communicate with participants shapes the quality of feedback you’ll receive. Ask, “How would you normally complete this task?” rather than “Can you do this?” This small change in phrasing shows respect for participants’ expertise and often reveals unexpected insights about how people use your site.
Keep your testing structure flexible but organised:
- Start with a warm-up task to help participants feel comfortable.
- Present tasks in a consistent format.
- Give people time to solve problems their own way before offering help.
- Write down both successful approaches and dead ends.
Most importantly, remember that participants are sharing their time and expertise with you. Make sure they know their feedback is valued and will lead to real improvements. Follow up after testing to show them how their input has helped make your site more accessible – it builds trust and encourages future participation.
Level up your accessibility testing skills with The A11Y Collective’s courses
Testing with users makes the difference between technically accessible websites and ones that work brilliantly for everyone. While we’ve covered the main approaches to accessibility testing in this guide, putting these methods into practice takes both knowledge and experience.
That’s where The A11Y Collective’s courses can help. Our practical sessions take you from accessibility basics through to advanced techniques, helping you build confidence in working with users who have disabilities. You’ll learn directly from experienced practitioners who understand both the technical and human sides of accessibility testing.
Want to develop your team’s accessibility skills?
Check out our expert-led courses to start creating genuinely inclusive websites that work for all your users.