Where AI Testing Ends and Accessibility Responsibility Begins

taught by: Matthew Elefant


Session Summary

Automation has transformed accessibility testing—but responsibility can’t be automated.
This session breaks down where AI accelerates accessibility efforts and where human judgment is essential to avoid false confidence, usability gaps, and real-world risk.


Description

AI testing has become a powerful accelerator for accessibility—but it’s also being misused in ways that create false confidence and real risk. In this session, we’ll break down where AI testing belongs in the accessibility lifecycle, and where relying on it can lead to usability failures, compliance gaps, and legal exposure.

Drawing on real-world experience—and Inclusive Web’s recognition as an Inc. Magazine Best in Business award winner for AI in Social Good—this talk explores how to apply AI responsibly, why human testing remains essential for judgment and validation, and how organizations can build accessibility programs that are fast, defensible, and grounded in real user experience. The goal isn’t to replace AI—it’s to use it in the right places, for the right reasons.


Practical Skills

  • Understand where AI testing should—and should not—be used within the accessibility lifecycle to reduce risk, improve accuracy, and avoid false confidence.
  • Learn how to balance AI automation with human testing to build an accessibility program that is scalable, defensible, and effective for real users.
  • Explore how custom Retrieval-Augmented Generation (RAG) pipelines and AI tools can enhance accessibility testing, reporting, and decision-making when applied with the right guardrails; a minimal sketch follows this list.
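
To make the RAG idea concrete, here is a minimal, hedged sketch of how retrieved guidance could ground an AI-generated explanation of an accessibility finding. Everything in it is illustrative: the GuidanceChunk type, the tiny in-memory knowledge base, and the keyword-overlap retriever are assumptions for demonstration, not any real product or library API, and a human reviewer is still expected to validate the output.

# Minimal sketch of a retrieval-augmented (RAG) flow for accessibility findings.
# All names and data below are illustrative assumptions, not a real product API.
from dataclasses import dataclass


@dataclass
class GuidanceChunk:
    source: str  # e.g. a WCAG success criterion reference
    text: str    # guidance text the model's answer should be grounded in


# Tiny in-memory "knowledge base"; a real system would index full WCAG/ARIA docs.
KNOWLEDGE_BASE = [
    GuidanceChunk("WCAG 2.2 SC 1.1.1",
                  "Non-text content needs a text alternative that serves the equivalent purpose."),
    GuidanceChunk("WCAG 2.2 SC 2.4.7",
                  "Keyboard focus must be visible when interactive elements receive focus."),
    GuidanceChunk("WCAG 2.2 SC 4.1.2",
                  "UI components must expose name, role, and value to assistive technology."),
]


def retrieve(query: str, k: int = 2) -> list[GuidanceChunk]:
    """Rank chunks by naive keyword overlap; real systems would use embeddings."""
    terms = set(query.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda chunk: len(terms & set(chunk.text.lower().split())),
        reverse=True,
    )
    return scored[:k]


def build_prompt(finding: str) -> str:
    """Ground the model's explanation in retrieved guidance instead of free recall."""
    context = "\n".join(f"[{c.source}] {c.text}" for c in retrieve(finding))
    return (
        "Using only the guidance below, explain this accessibility finding and flag "
        "anything that still needs human verification.\n\n"
        f"Guidance:\n{context}\n\nFinding: {finding}"
    )


if __name__ == "__main__":
    # The resulting prompt would be sent to an LLM; a human tester still validates
    # the model's explanation before it goes into a report.
    print(build_prompt("Custom dropdown has no visible focus indicator for keyboard users"))

The guardrail in this sketch is the prompt itself: the model is constrained to the retrieved guidance and asked to flag what still needs human verification, which mirrors the session's point that AI can accelerate reporting while judgment and validation stay with people.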