AI for Accessibility projects
Learn how our grantees are using AI-powered technology to make the world more inclusive.
Improving Braille literacy skills via gamification
Braille AI Tutor combines speech recognition with a Braille display to build students' Braille literacy through gamification, so they can practise independently or during distance learning.
Empowering students who are blind with I-Assistant
inABLE and I-Stem created I-Assistant, which combines text-to-speech, speech recognition and computer vision in an AI-powered conversational experience, giving test-takers who are blind or have low vision an alternative to in-person readers and writers.
Chatbot to enable support for people with disabilities
Open University students with disabilities collaborated with professors to create the ADMINS chatbot assistant. ADMINS helps remove the barriers to independent living caused by forms and administrative processes.
Captioning with AI technology
Rochester Institute of Technology (RIT) is exploring various ways to improve captioning options for students who are Deaf or hard of hearing, including automatically removing disfluencies and adding punctuation.
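Removing disfluencies can be sketched in a few lines. The fragment below is an illustrative simplification, not RIT's method: it strips a hypothetical set of filler words and collapses immediate word repetitions, whereas a production captioning system would use a trained sequence model.

```python
import re

# Hypothetical filler set for illustration; a real system would learn these.
FILLERS = {"um", "uh", "er", "ah"}

def strip_disfluencies(transcript: str) -> str:
    """Remove common filler words and collapse immediate word repetitions."""
    # Drop the multi-word filler "you know" first.
    text = re.sub(r"\byou know\b,?\s*", "", transcript, flags=re.IGNORECASE)
    cleaned = []
    for word in text.split():
        bare = word.strip(",.?!").lower()
        if bare in FILLERS:
            continue
        # Collapse stutters such as "the the".
        if cleaned and bare == cleaned[-1].strip(",.?!").lower():
            continue
        cleaned.append(word)
    return " ".join(cleaned)

print(strip_disfluencies("um so the the lecture uh starts you know at nine"))
# → "so the lecture starts at nine"
```

Punctuation restoration is the harder half of the problem and typically relies on a language model rather than rules like these.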
Expanding inclusive hiring
Our Ability is advancing inclusive hiring by offering jobseekers with disabilities an accessible, intuitive AI-powered chatbot that helps them identify their skills and create a profile matched against jobs posted in the portal.
VR job interview training for people with autism
The Frist Center for Autism and Innovation at Vanderbilt University is developing a virtual job coach that helps candidates with autism prepare for job interviews by detecting interviewees' stress and attention levels. Their solution also includes an employer dashboard.
AI to support manufacturing and distribution jobs
In partnership with GiGi's Playhouse and TRI Industries, Clover Technologies has designed and developed an open-source conversational AI application to help people living with Down syndrome and/or autism perform routine jobs in manufacturing environments.
Helping jobseekers find their career paths
Leonard Cheshire Disability is collecting data to develop an algorithm that will help jobseekers with disabilities identify their interests and career goals in an accessible and intuitive way.
Bringing mental health research and AI together
Partnering with Mental Health America (MHA), Northwestern University and the University of Toronto are developing an adaptive, AI-powered text-messaging platform that delivers engaging, personalised support to young adults who may not seek formal mental health treatment.
AI tools for mental helplines
Befrienders India and the Social Dynamics and Well-Being (SocWeB) Lab at Georgia Tech are using AI to match crisis line callers with volunteers based on the needs of the callers and the experiences of the volunteers.
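One simple way to frame this kind of matching is as tag overlap between a caller's stated needs and each volunteer's experience. The sketch below is purely illustrative, using Jaccard similarity over hypothetical tags; the actual SocWeB/Befrienders matching approach is not described here.

```python
# Illustrative sketch only: match a caller's needs to volunteer experience
# using Jaccard similarity over tags. All names and tags are hypothetical.

def jaccard(a: set, b: set) -> float:
    """Overlap between two tag sets, from 0 (disjoint) to 1 (identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def best_volunteer(caller_needs, volunteers):
    """Return the volunteer whose experience tags best overlap the caller's needs."""
    return max(volunteers, key=lambda v: jaccard(set(caller_needs), set(v["experience"])))

volunteers = [
    {"name": "A", "experience": {"grief", "anxiety"}},
    {"name": "B", "experience": {"exam stress", "anxiety", "family conflict"}},
]
match = best_volunteer({"exam stress", "anxiety"}, volunteers)
print(match["name"])  # → "B", the volunteer sharing the most relevant tags
```

A deployed system would also need to weigh volunteer availability, load balancing and safety escalation, none of which this toy example covers.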
Understanding empathy in text-based peer support
Researchers at the University of Washington (UW) are working with TalkLife and Supportiv to train natural language models to recognise empathy in text-based messages, then offer suggestions to make responses more empathetic.
Improving youth mental health peer helplines with AI
TeenLine counsellors are using data annotation and machine-learning modelling to map teen language to mental health issues and symptoms, helping teen helplines respond more effectively.
Ability Initiative at University of Texas at Austin
The University of Texas at Austin (UT Austin) works with Microsoft Research and AI for Accessibility to collect an expansive labelled dataset to improve the accuracy of automatic image descriptions of photos captured by people who are blind or have low vision.
Safety and independence for people who are blind or low vision
WeWALK’s ergonomic attachment to the white cane uses inbuilt motion sensors, voice control and machine learning models to improve the wayfinding and mobility experience for people who are blind or have low vision.
Personalising and improving object recognition with AI
Through project ORBIT (Object Recognition for Blind Image Training), City University of London is collecting data and developing experimental algorithms for improving personalised AI object recognition.
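Personalised object recognition is often cast as a few-shot problem: a user supplies a handful of examples of each personal object, and the model recognises those specific items later. The sketch below illustrates one common few-shot baseline, a nearest-centroid classifier over embedding vectors. It is an assumption-laden illustration, not ORBIT's actual algorithm, and uses random vectors in place of a real feature extractor's output.

```python
import numpy as np

rng = np.random.default_rng(0)

def centroid_classifier(support: dict):
    """Build per-object centroids from a user's few example embeddings.

    `support` maps each object label to an (n_examples, dim) array of
    embeddings, as a pretrained feature extractor might produce.
    """
    labels = list(support)
    centroids = np.stack([support[label].mean(axis=0) for label in labels])

    def predict(embedding: np.ndarray) -> str:
        # Assign the label whose centroid is nearest in embedding space.
        dists = np.linalg.norm(centroids - embedding, axis=1)
        return labels[int(dists.argmin())]

    return predict

# Fake embeddings standing in for a real feature extractor: two well-
# separated clusters of five 8-dimensional examples each.
my_keys = rng.normal(loc=0.0, size=(5, 8))
my_mug = rng.normal(loc=3.0, size=(5, 8))
predict = centroid_classifier({"my keys": my_keys, "my mug": my_mug})
print(predict(np.full(8, 3.0)))  # → "my mug" (query sits in the mug cluster)
```

The appeal of this design for personalisation is that adding a new object needs only a few user-supplied examples and a mean, with no retraining of the underlying feature extractor.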
Personalised and assistive navigation for all pedestrians
Existing navigation apps are primarily for cars, not pedestrians. Using machine learning and inclusive crowdsourcing, iMerciv is building MapinHood, an app for pedestrian travel that provides assistive navigation for everyone.
Understanding non-standard speech patterns
Voiceitt is building automatic speech recognition technology designed to understand non-standard speech patterns, giving people with speech disabilities an enhanced communication platform.