Disability Bias & New Frontiers in Artificial Intelligence
taught by: Christopher Land
Bias in artificial intelligence (AI) systems can cause discrimination against marginalized groups, including people with disabilities. Learn how AI systems and machine learning work and how we can create AI systems that provide fair and equal treatment to everyone.
Bias and discrimination in data science, artificial intelligence, and machine learning are established problems that technologists have struggled to control since the inception of AI. The power of machine learning comes from pattern recognition within vast quantities of data. Using statistics, AI can reveal new patterns and associations that human developers might miss or lack the processing power to uncover. But these patterns best fit the statistical majority, and marginalized groups are often underrepresented in the data. When they are, AI systems can be discriminatory.
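The majority-fit problem can be sketched in a few lines. This is a minimal, hypothetical illustration (the numbers are invented): a "model" that simply predicts the overall average fits a 95-sample majority group closely while producing a much larger error for a 5-sample minority group, because the average is pulled toward the majority.

```python
# Hypothetical data: 95 samples from a majority group, 5 from a
# marginalized group whose true value differs.
majority = [10.0] * 95
minority = [20.0] * 5

data = majority + minority

# A naive "best fit": predict the global mean for everyone.
prediction = sum(data) / len(data)  # pulled toward the majority value

# Error the shared prediction produces for each group.
majority_error = abs(prediction - 10.0)
minority_error = abs(prediction - 20.0)

print(prediction, majority_error, minority_error)  # 10.5 0.5 9.5
```

The overall ("average") error looks small, which is exactly why such bias can go unnoticed unless per-group performance is measured separately.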
This presentation will provide background on how artificial intelligence and machine learning work and explain how bias is introduced into AI systems if not proactively prevented. It will review the approaches different nations are taking to reduce bias in AI systems and to ensure that all citizens, regardless of disability status, receive fair and equal treatment in data science, and it will offer recommendations for strengthening legal protections for people with disabilities.
- Provide an overview of AI systems and machine learning, including disability bias, for accessibility professionals and related non-development roles.
- Discuss methods for building AI systems inclusively and accessibly to mitigate bias.
- Explore current worldwide progress on establishing legal protection against AI bias, with recommendations on strengthening laws to protect people with disabilities from discrimination by AI systems.