
Live: A11y, AI & Machine Learning: Opportunities & Threats

taught by: Christopher Land


Session Summary

Discussion on new technologies impacting people with disabilities, both good and bad, including innovations on the horizon and around the corner presenting promise and risk.


Description

Watch this presentation in the Knowbility Learning Center

Emerging technologies of artificial intelligence, big data and machine learning hold great promise for helping people with disabilities live independently. At the same time, the technologies themselves are morally neutral and could be used to discriminate against people with disabilities, whether unintentionally or overtly.
Necessity is the mother of invention, and technologies initially developed to overcome various disabilities have grown into tools that help everyone. Alexander Graham Bell, inventor of the telephone, was a teacher of deaf students. Early typewriters were developed in part to help blind people write letters. Texting was initially developed as a way for people who are deaf or hard of hearing to use the phone and is now a daily form of communication for billions. The same can be said for autocorrect and autosuggest: critically helpful for users with cognitive and mobility impairments, and now beneficial to everyone.
Technology has advanced by leaps and bounds, offering unprecedented tools to help people with disabilities, and artificial intelligence has advanced accessibility immeasurably. Optical character recognition (OCR) allows blind users to read independently: OCR devices developed in the 1970s took up the better part of a room, while today any user with a smartphone can scan the text in front of them and hear it read aloud. Voice recognition has become invaluable to people with mobility impairments, allowing them to talk to the computer, navigate websites, dictate letters and control AI assistants, and machine learning has even enabled voice recognition to be trained for users with speech impediments. The applications are limited only by the imagination. A father recently set up a speech output device so his son could interact with Amazon Echo; the boy was born with cerebral palsy, is blind, cannot walk or talk, and has movement only in one hand, yet with these devices he can use the internet and call loved ones independently. Using Alexa and Raspberry Pi, a developer has created a proof-of-concept motorized wheelchair that can be controlled entirely by speech.
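To make the leap from room-sized 1970s hardware to a pocket device concrete, here is a minimal Python sketch of the scan-and-speak pipeline described above. It is not code from the presentation: it assumes the Tesseract OCR engine plus the pytesseract and pyttsx3 packages are installed, and the image file name is hypothetical.

    from PIL import Image
    import pytesseract   # wrapper around the Tesseract OCR engine
    import pyttsx3       # offline text-to-speech

    def read_aloud(image_path: str) -> None:
        """Extract printed text from an image and speak it aloud."""
        text = pytesseract.image_to_string(Image.open(image_path))
        engine = pyttsx3.init()   # uses the platform's default TTS voice
        engine.say(text)
        engine.runAndWait()       # blocks until speech finishes

    read_aloud("page.jpg")        # hypothetical photo of a printed page
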
Machine learning and big data allow AI to extract meaning from images and video, which can be applied to help people with disabilities. This is already being used to generate alternate text for images on the web. AI has also been developed that can read lips more accurately than human lip readers; this could improve automatic AI captioning whenever the speaker is visible in the video, and it could eventually be adapted to smartphones.
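A first-pass alternate-text generator can be sketched with an off-the-shelf image-captioning model. The snippet below is an illustrative assumption, not a tool named in the talk; the BLIP model name and image file are examples, and machine-written captions should supplement human review rather than replace it.

    from transformers import pipeline

    # Off-the-shelf image-captioning model (illustrative choice).
    captioner = pipeline("image-to-text",
                         model="Salesforce/blip-image-captioning-base")

    # Caption a local image and emit it as alt text (hypothetical file).
    caption = captioner("photo.jpg")[0]["generated_text"]
    print(f'<img src="photo.jpg" alt="{caption}">')
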
CAPTCHA techniques, which differentiate real users from troublesome bots on the web, have been a recurring obstacle for users with disabilities. As bots have grown smarter, CAPTCHA challenges have become harder, creating new barriers for people with disabilities. Using facial recognition through a webcam has been proposed as one way around this barrier.
AI can now comprehend text well enough to summarize or simplify complex, jargon-heavy writing so that it is more easily understood by people with cognitive disabilities. The same technology serves people reading in a second language, as well as anyone in a hurry.
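A minimal sketch of that idea follows, using a generic pretrained summarization model as a stand-in for text simplification; the model choice and the sample legalese are assumptions made for illustration.

    from transformers import pipeline

    # Generic pretrained summarizer standing in for text simplification.
    summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

    dense_text = (
        "Notwithstanding the provisions of the preceding paragraph, the "
        "lessee shall indemnify and hold harmless the lessor against any "
        "and all claims, demands, and causes of action arising out of or "
        "in connection with the lessee's use and occupancy of the premises "
        "during the term of this agreement, except where such claims "
        "result from the lessor's own negligence."
    )
    # Produce a short plain-language rendering of the dense passage.
    summary = summarizer(dense_text, max_length=40, min_length=10,
                         do_sample=False)
    print(summary[0]["summary_text"])
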
This is all very promising, but there is also a dark side. Technology itself is amoral and presents different risks depending on who wields it. New technology promises robot caregivers that help elderly people and people with disabilities get around and complete daily tasks independently, but it does not take much imagination to repurpose a robot built for care toward less noble ends.
Discrimination against people with disabilities is a salient risk in artificial intelligence. As with age, gender and race, machine learning systems can develop bias around disability status if developers are unaware of the issue and do not design for inclusion intentionally. Early facial recognition systems did not work for users with dark skin, job-screening AI has developed bias against female candidates, and AI built to predict recidivism became racially biased, wrongly denying release to African Americans.
Disability-status bias is even harder to control for than age, gender or race bias: disability is not always disclosed or visible, and the range of disability types makes this group very diverse. We must recognize that the data sets used to train machines must be inclusive, and we must be aware of the potential bias so we can control for it. By their nature, machine learning systems optimize for norms; they are built on statistical frequency in data and therefore minimize outliers, and people with disabilities often appear in that data precisely as outliers.
Big data can undermine health privacy protections and "out" people with disabilities without their consent. Even more troubling, this data is owned by profit-driven companies and is unregulated. Behavioral data gathered from online traces can be compiled by social media and marketing companies to profile users with disabilities and then market to or manipulate them. This could be harmless or even beneficial, for example by exposing a user to a new assistive technology for their disability, but marketers can also target users' vulnerabilities.
Advertisers have been accused of using Facebook's platform to overtly discriminate against people with disabilities using collected data, and in 2019 the Department of Housing and Urban Development charged Facebook over the practice: in choosing who would be shown ads for housing properties, advertisers were given the option to exclude certain groups, including people with disabilities. HIPAA protects health-related information only within healthcare and related industries and does not apply to social media.
AI is also used in other kinds of people analytics, including job screening, college recruiting, and health and life insurance. Left unchecked, these systems are likely to encourage discrimination against people with disabilities. For example, a candidate taking an online job-screening test may take longer to complete it when using assistive technology, and may be automatically screened out before a human interviewer ever sees them. Globally, consider China's Social Credit System, which uses AI to analyze citizens' behavior and dispenses rewards and punishments based on what the state deems good behavior. What happens when this type of program is rolled out in societies that stigmatize disability?
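The screening scenario is easy to make concrete. The hypothetical filter below, invented purely for illustration and not drawn from any real vendor's product, shows how a hard time cutoff rejects a stronger candidate who simply needs longer with a screen reader, while judging on demonstrated ability alone avoids the bias.

    from dataclasses import dataclass

    @dataclass
    class Candidate:
        name: str
        score: float          # test performance, 0-100
        minutes_taken: float  # completion time

    candidates = [
        Candidate("A", 92, 38),   # no assistive technology
        Candidate("B", 95, 74),   # screen-reader user: same test, slower I/O
    ]

    # Biased rule: a hard time cutoff screens out candidate B
    # despite the higher score.
    passed_biased = [c for c in candidates
                     if c.score >= 80 and c.minutes_taken <= 60]

    # Fairer rule: judge on ability alone, or apply documented time
    # accommodations before any cutoff.
    passed_fair = [c for c in candidates if c.score >= 80]

    print([c.name for c in passed_biased])  # ['A']
    print([c.name for c in passed_fair])    # ['A', 'B']
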
To protect people with disabilities from the risks of AI, we need to expand legal protections such as the ADA and HIPAA, which were put in place before these technologies became as prevalent as they are today. We also need to expand education on this topic so that AI developers design systems to be inclusive from the start. Tools are beginning to appear to help here, such as IBM's AI Fairness 360 toolkit for detecting and mitigating bias, and more work in this direction should be encouraged.
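As a hedged sketch of what working with such a toolkit can look like, the snippet below audits a toy hiring dataset for disability-status bias using AI Fairness 360; the data, column names and group definitions are invented for illustration, not taken from the presentation.

    import pandas as pd
    from aif360.datasets import BinaryLabelDataset
    from aif360.metrics import BinaryLabelDatasetMetric
    from aif360.algorithms.preprocessing import Reweighing

    # Toy applicant data: disability=1 marks the unprivileged group.
    df = pd.DataFrame({
        "years_experience": [5, 3, 6, 2, 4, 7, 3, 5],
        "disability":       [0, 0, 0, 0, 1, 1, 1, 1],
        "hired":            [1, 1, 1, 0, 0, 1, 0, 0],
    })

    dataset = BinaryLabelDataset(df=df, label_names=["hired"],
                                 protected_attribute_names=["disability"])

    unpriv, priv = [{"disability": 1}], [{"disability": 0}]
    metric = BinaryLabelDatasetMetric(dataset, unprivileged_groups=unpriv,
                                      privileged_groups=priv)
    # Ratio of favorable-outcome rates; 1.0 means parity, here ~0.33.
    print("Disparate impact:", metric.disparate_impact())

    # One mitigation: reweigh training examples so both groups
    # contribute equally before a model is fit.
    reweighed = Reweighing(unprivileged_groups=unpriv,
                           privileged_groups=priv).fit_transform(dataset)
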
The emerging technologies of machine learning, big data and artificial intelligence open up remarkable opportunities for advancing accessibility. Increased discrimination is also a real risk, but one that can be mitigated through education and expanded legal protection for people with disabilities.


Practical Skills

  • Gain an understanding of upcoming technological advances that will provide rich benefits to people with disabilities.
  • Expand awareness of technological threats to people with disabilities as new tools are deployed, such as automated hiring screens that may contain bias.
  • Understand how we can combat bias and threats against people with disabilities as we build these new systems, and how we can spread awareness to head off potential problems.