Data-Driven Accessibility Systems

Microsoft Research New England aims to build the accessibility systems of the future. In particular, we are focused on building systems to better support sign language users and low-vision readers.

Accessibility is a major concern for many people with disabilities. Over a billion people worldwide (and nearly one in five in the U.S.1) live with some form of disability. Despite the size of this group, which is growing as the world's population ages, many technical systems are not easily accessible to people with disabilities. For example, touchscreens requiring fine-tuned gestures exclude people with a vast array of motor impairments; traditional alarm clocks and doorbells poorly serve people who are deaf or hard-of-hearing; and visual informatics exclude people with visual impairments.

There are about 70 million deaf people who use a sign language as their primary language, for whom written languages like English can be inaccessible. Contrary to common misconception, there is no universal sign language, and sign languages are not simply signed versions of spoken languages. For example, American Sign Language has a vocabulary and grammatical structure very different from English. Sign languages also lack a standard written form, meaning that many Deaf people cannot access text in their primary language. Written English can be inaccessible as well, as it can be extremely difficult to learn a language without hearing it. This background frames many high-impact research challenges in sign language, including technical problems such as language recognition and translation, as well as social computing systems that could enable people to more easily learn sign language or to connect with other signers via new kinds of online communities. Working closely with the Deaf community, we are pursuing a range of research projects in this direction, including building corpora of signing data that can be used to train sign language recognition and translation algorithms.

Worldwide, there are also almost 300 million people with visual impairments, which can make reading text difficult. The small text of personal devices can be particularly inaccessible. Sighted users strain to read small letters, especially without glasses at hand; and low-vision users, whose vision is not correctable with glasses, must rely on memory, zoom that restricts visible content, or text-to-speech software. There are many technical challenges involved in supporting low-vision readers. In particular, to help improve and enhance the reading experience, we have been exploring redesigning traditional letterforms, leveraging modern screen capabilities such as color, animation, and a larger diversity of shapes. We call these alternate letterforms Smartfonts; they challenge the assumption that text must be rendered in traditional scripts. Personal devices have made this approach possible, by allowing users to adopt new character systems without language reform or mass adoption. We are currently building and evaluating systems that allow users to more easily learn and use these alternate scripts.
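To make the core idea concrete, a Smartfont can be thought of as a substitution from traditional characters to alternate glyphs applied before rendering. The sketch below is a toy illustration only; the glyph choices are invented for this example and do not reflect any actual Smartfont design.

```python
# Toy sketch of a character-substitution "smartfont": each Latin letter
# is remapped to an alternate glyph before rendering. The glyph choices
# here are invented for illustration, not an actual Smartfont design.

# Hypothetical mapping from a few Latin letters to alternate shapes.
GLYPH_MAP = {
    "a": "▲",
    "b": "●",
    "c": "◆",
}

def render(text: str) -> str:
    """Substitute each character with its alternate glyph, if one exists."""
    return "".join(GLYPH_MAP.get(ch, ch) for ch in text.lower())

print(render("abc cab"))  # prints "▲●◆ ◆▲●"
```

In a real system the substitution would happen at the font-rendering layer rather than in the text itself, so that the underlying characters (and thus search, copy-paste, and screen readers) remain unchanged.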


1 By the U.S. Census.

People

Portrait of Danielle Bragg

Danielle Bragg

Senior Researcher

Portrait of Hal Daumé III

Hal Daumé III

Principal Researcher

Portrait of Alex Lu

Alex Lu

Senior Researcher

Portrait of Philip Rosenfield

Philip Rosenfield

Principal Research Program Manager