Automated Accessibility Tooling: Hype versus Reality
taught by: Nicholas Cook
Session Summary
Some accessibility tooling vendors estimate that their products can catch anywhere from 30% to 80% of “accessibility issues”. But what counts as an “accessibility issue”, and how are these numbers calculated? In this talk, we’ll separate hype from reality for the most common class of automated accessibility tooling, Static Code Analysis (SCA), and explore how new automation techniques can expand the universe of what’s possible.
Description
“Automation” means more than just linters and static code analysis. While those are great tools, and everyone should keep using them, they have some significant gaps in coverage for high-severity WCAG success criteria (No Keyboard Trap, Focus Order, Label in Name, On Focus, and On Input, to name a few). In this talk we’ll review different automation tools: linters, static code analysis, static code analysis in flows, end-to-end testing with role selectors, and accessibility-first end-to-end testing. We’ll cover both how each is useful and the pitfalls each has with respect to specific WCAG success criteria. We’ll work through examples of real-world accessibility bugs and determine which tools would catch them and which wouldn’t.
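To make “end-to-end testing with role selectors” concrete, here is a minimal illustrative sketch, not taken from any specific library (though queries like Testing Library’s `getByRole` work in a similar spirit): a toy function that finds an element in a DOM-like tree by its ARIA role and accessible name. The tree shape and the `visibleLabel` property are assumptions made for this example. A query like this can surface a mismatch between an element’s visible label and its accessible name, the kind of Label in Name (WCAG 2.5.3) problem that static analysis of source code alone often misses.

```javascript
// Toy role-selector query (illustrative only): walk a DOM-like tree
// and return the single node matching the given ARIA role and,
// optionally, its accessible name.
function getByRole(root, role, { name } = {}) {
  const matches = [];
  (function walk(node) {
    if (node.role === role && (name === undefined || node.name === name)) {
      matches.push(node);
    }
    (node.children || []).forEach(walk);
  })(root);
  if (matches.length !== 1) {
    throw new Error(`Expected exactly one "${role}", found ${matches.length}`);
  }
  return matches[0];
}

// Hypothetical page fragment: a button whose accessible name is
// "Submit form" while its visible label is "Submit".
const tree = {
  role: 'generic',
  children: [
    { role: 'button', name: 'Submit form', visibleLabel: 'Submit' },
  ],
};

// Selecting by role + accessible name exercises the accessibility tree
// directly, so a test can assert that the visible label appears in the
// accessible name (the relationship Label in Name requires).
const button = getByRole(tree, 'button', { name: 'Submit form' });
console.log(button.name.includes(button.visibleLabel));
```

Because the query goes through roles and accessible names rather than CSS classes or test IDs, the test fails the moment the accessibility tree stops matching what users of assistive technology would experience.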
The goal of this talk is not to market one tool as “better” than the others, but to build understanding of all the available tooling, and hopefully to put some hard data behind the skepticism we all feel toward claims that automation can solve everything. All of these tools are great ways to understand where your product stands with regard to accessibility, and a clearer picture of the differences between them will inform decisions about how to achieve accessibility coverage across the board. But it’s crucial that we understand their limitations in order to make the best use of them.
Practical Skills
- How other types of automation complement static code analyzers and help create more complete coverage.