
No Controls, No Screen, No Problem: Using Voice & Gesture To Interact With Learning Tools

taught by: Kelly Lancaster
co-presented by: Damien Brockmann


Session Summary

Machine learning libraries, now available in front-end applications, offer a new set of tools for inclusive design. In this session, you will “train” an interactive application to associate voice and/or gesture commands with actions in the browser to accomplish a goal. Together, we will consider the affordances that machine learning provides for accessibility in education technology.
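
For a concrete sense of what “training” in the browser can look like, here is a minimal sketch using the transfer-learning API of TensorFlow.js's speech-commands model (the library family discussed in the Description below). The labels (“next”, “back”), the number of examples, and the dispatched custom events are hypothetical stand-ins for actions in an interactive, not the session's actual exercise:

    // Sketch: teach the browser two custom voice commands with
    // TensorFlow.js's speech-commands model, then map each recognized
    // word to an action in an interactive.
    import * as speechCommands from '@tensorflow-models/speech-commands';

    async function trainVoiceCommands(): Promise<void> {
      // The base recognizer extracts audio features with the browser's FFT.
      const base = speechCommands.create('BROWSER_FFT');
      await base.ensureModelLoaded();

      // A transfer-learning recognizer fine-tuned on our own labels.
      const transfer = base.createTransfer('lesson-controls');

      // Collect a few spoken examples per label from the microphone.
      // (In the session, participants would record these live.)
      for (let i = 0; i < 8; i++) await transfer.collectExample('next');
      for (let i = 0; i < 8; i++) await transfer.collectExample('back');

      await transfer.train({ epochs: 25 });

      // Listen continuously and dispatch the most probable label.
      await transfer.listen(async (result) => {
        const labels = transfer.wordLabels();
        const scores = result.scores as Float32Array;
        const best = labels[scores.indexOf(Math.max(...scores))];
        if (best === 'next') document.dispatchEvent(new CustomEvent('go-next'));
        if (best === 'back') document.dispatchEvent(new CustomEvent('go-back'));
      }, { probabilityThreshold: 0.9 });
    }

    trainVoiceCommands();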


Description

We will start the session with an overview of Macmillan Learning’s accessibility strategy and demonstrate examples of our interactive content, focusing on keyboard navigation, screen reader alerts, and sonification. We will then give a brief introduction to machine learning and the open-source libraries that can be used in front-end applications, focusing specifically on Google’s TensorFlow.js. These libraries bring voice and camera recognition directly into the browser, with no server required, and can offer richer experiences for users.

We will demo voice and gesture recognition applications and engage the class with a hands-on activity in which participants “train” a complex interactive to respond to voice and/or gesture commands of their choosing. Each participant will have the opportunity to share their prototype with the group. Finally, we will discuss the opportunities and challenges that these interactions present for accessibility and consider the possibilities for a future in education technology that is no longer tethered to a keyboard and mouse.
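
The gesture side of the activity could follow a common TensorFlow.js pattern: pair MobileNet embeddings of webcam frames with a KNN classifier, so a participant can “teach” a gesture with a handful of examples. The sketch below assumes that pattern; the gesture labels, key bindings, polling interval, and dispatched events are illustrative only:

    // Sketch: train a webcam gesture recognizer in the browser by
    // classifying MobileNet embeddings with a KNN classifier.
    import * as tf from '@tensorflow/tfjs';
    import * as mobilenet from '@tensorflow-models/mobilenet';
    import * as knnClassifier from '@tensorflow-models/knn-classifier';

    async function trainGestures(video: HTMLVideoElement): Promise<void> {
      const net = await mobilenet.load();
      const classifier = knnClassifier.create();

      // Capture a frame and store its embedding under a gesture label.
      const addExample = (label: string): void => {
        const img = tf.browser.fromPixels(video);
        const activation = net.infer(img, true); // embedding, not logits
        classifier.addExample(activation, label);
        img.dispose();
      };

      // Participants might press keys (or buttons) while posing each gesture.
      document.addEventListener('keydown', (e) => {
        if (e.key === '1') addExample('open-hand');   // e.g. "play"
        if (e.key === '2') addExample('closed-fist'); // e.g. "pause"
      });

      // Classify the live feed and fire the matching action.
      setInterval(async () => {
        if (classifier.getNumClasses() === 0) return;
        const img = tf.browser.fromPixels(video);
        const result = await classifier.predictClass(net.infer(img, true));
        img.dispose();
        if (result.confidences[result.label] > 0.9) {
          document.dispatchEvent(new CustomEvent(`gesture-${result.label}`));
        }
      }, 500);
    }

Because the classifier only stores embeddings collected on the spot, each participant’s prototype can respond to gestures of their own choosing without retraining the underlying network.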


Practical Skills

  • Inclusive design
  • Rapid prototyping
  • Machine learning concepts and tools