CHI 2021: Redefining accessibility to build more inclusive technologies

Accessibility and inclusion represent a growing space in the technology landscape, and the ways research and development are being used to empower people across abilities are expanding in exciting directions.

Instead of treating disabilities as conditions in need of solutions—as has been the case over the years with the medical-based approach to accessibility—research is moving toward a social approach, examining and addressing the societal expectations imposed on people with disabilities and society’s lack of inclusion. The world comprises people of various abilities but accommodates the abilities of the majority. A world in which everyone were blind would be designed very differently from the world we know today.

This leaves a gap between what people with disabilities are capable of and what the environment set up for the majority enables them to do. Technology can play a key role in closing this gap. Starting from such a gap and working alongside impacted communities, researchers and technologists can push beyond making something accessible to providing tools people can use to augment their own capabilities, says Microsoft Principal Researcher Cecily Morrison.

In “Social Sensemaking with AI: Designing an Open-ended AI experience with a Blind Child”—one of several Microsoft papers in accessibility and inclusion featured at the 2021 ACM CHI Virtual Conference on Human Factors in Computing Systems (CHI 2021)—Morrison and her coauthors demonstrate how technology can be designed in a way that enables people to define how they live their lives rather than dictating what they can accomplish. Their PeopleLens prototype, an intelligent headset, provides children who are blind with information such as the names of people in a physical space to help them experience social agency and have opportunities to develop their social interaction skills.

“It’s really shifting the power from a researcher to those users in terms of formulating their own future,” says Morrison.

Two CHI papers from Microsoft Research India explore a divide that affects how meaningfully technologists and researchers can narrow the ability-environment gap: the unequal distribution of resources.

“Even though the majority of people with disabilities are living in the Global South, most of the work is targeted toward resource-rich Global North environments,” says Principal Researcher Manohar Swaminathan.

In the first paper, researchers gain insight into how experienced smartphone users in India who are blind navigate Android devices specifically, as these are the more affordable option in the country. To help children in India who are blind receive a computer science education, authors of the second paper worked directly with teachers of students who are blind, many of whom are blind themselves, to understand the opportunities and challenges of deploying technology in settings lacking technical support and computing infrastructure.

Meanwhile, another CHI paper shows that understanding people’s needs and expectations is only one of many steps; particularly when it comes to AI, data that supports the development of technology that’s effective and helpful in the real world is essential. With the game prototype ASL Sea Battle, researchers aim to provide immediate benefits while also facilitating the collection of much-needed quality data to further help fill the void in tools and services available to signers.

Explore these papers in more detail below, and check out the CHI workshops on accessibility and inclusion co-organized by Microsoft researchers.

Using games to collect and label richer data for the creation of signed language systems

In short: The main barrier to developing intelligent signed language systems is a lack of training data from diverse signers in diverse environments. ASL Sea Battle collects and labels real-world videos from signers while also providing a valuable resource for people who use signed language.

A deeper dive: In “ASL Sea Battle: Gamifying Sign Language Data Collection,” which received an honorable mention at CHI, researchers introduce a smartphone game prototype based on the classic strategy game Battleship. In the classic game, players use a letter-number pair to guess the position of an opponent’s ship on a grid; in ASL Sea Battle, each square is instead labeled with a sign. To pick a location, players submit a video of themselves executing the sign of their chosen square; opponents confirm the guess by selecting the corresponding square on their grid. The exchange facilitates not only the collection of real-world signing data but also its labeling. The research team, which includes fluent signers who are deaf and American Sign Language (ASL) linguists, engaged in an iterative design process, prototyping a variety of games with members of the signing community.
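
To make the game-as-data-collection mechanic concrete, here is a minimal sketch in Python of how a single turn could double as a consented labeling event. All names here (SeaBattleTurn, submit_guess, and so on) are hypothetical illustrations, not the paper’s implementation:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LabeledSignVideo:
    """One consented, labeled data point produced by a game turn."""
    video_path: str   # recording of the player signing their guess
    sign_label: str   # the sign assigned to the chosen board square
    recorded_at: str

@dataclass
class SeaBattleTurn:
    """Hypothetical sketch of a turn in a game like ASL Sea Battle."""
    board_signs: dict                             # square id -> sign label
    dataset: list = field(default_factory=list)   # grows as the game is played

    def submit_guess(self, square: str, video_path: str) -> str:
        """Player signs the label of their chosen square on camera.

        The opponent then confirms the guess by tapping the matching
        square, which implicitly verifies the label for the video.
        """
        label = self.board_signs[square]
        self.dataset.append(LabeledSignVideo(
            video_path=video_path,
            sign_label=label,
            recorded_at=datetime.now(timezone.utc).isoformat(),
        ))
        return label

turn = SeaBattleTurn(board_signs={"A1": "BOAT", "B2": "FISH"})
turn.submit_guess("B2", "videos/player1_turn3.mp4")
print(turn.dataset[0].sign_label)  # "FISH"
```

Because the label comes from the board itself and the opponent’s response confirms it, play yields labeled video without a separate annotation step.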

Forging new ground: The researchers believe their effort is the first to leverage games to collect signed language data.

The inspiration: The current state of the art in signed language systems is far behind that of spoken/written language systems. For example, there are many resources serving English speakers, ranging from email clients to online articles, but few serving people who prefer to use a signed language. And because signed languages don’t have a standard written form, today’s text-based interfaces typically fail to support them. The inequity exists for a few main reasons: most technologists are spoken/written language users, signed language support is not typically considered or prioritized, and there’s a lack of sufficient training data.

Areas of impact: Gamifying signed language data collection has the potential to produce larger, more representative signed language datasets. Such datasets would enable the creation of machine learning models that work well for a more diverse set of end users, not only in terms of personal demographics such as race and gender but also in terms of linguistic elements such as regional dialect. No general sign language recognition or translation system currently exists that is viable for real-world use; larger, more representative datasets can help change that.

Key takeaway: If you’re a researcher working in a data-scarce domain—and particularly with marginalized communities—explore building resources, like games and educational apps, that can serve the community while also collecting data with participant consent and using secure protocols and servers to help protect privacy. Such methods can provide immediate and direct benefit to a community. They also can remove barriers to participation, expanding inclusion in the data collection initiative itself, which is important to building ML applications that better serve the diverse group of people who will use them.

The research team: Danielle Bragg and William Thies of Microsoft Research; former Microsoft Research contractors John W. Gallagher and Courtney Oka; and Naomi Caselli and Miriam Goldberg of Boston University.

The PeopleLens: Using AI to help children who are blind build their social skills

To help children who are blind or have low vision cultivate their social skills and experience social agency, researchers developed the PeopleLens, a head-mounted augmented reality device that identifies the location of people in the wearer’s social environment. Over a seven-month period, the team worked with Theo, a young boy who is blind, to explore the real-world use and potential of the prototype. This photograph was taken by Jonathan Banks.

In short: Microsoft has focused on AI that extends human capability rather than replaces it. Researchers worked with people who are blind or have low vision to imagine what that might mean practically for society’s collective future with AI. The outcome is a research prototype called the PeopleLens, an assistive agent running on smart glasses that helps users who are blind or have low vision understand who’s in their immediate social environment. 

A deeper dive: In “Social Sensemaking with AI: Designing an Open-ended AI experience with a Blind Child,” researchers report a seven-month exploration of the PeopleLens with a boy who is blind. The PeopleLens is a head-mounted augmented reality device designed to give children who are blind or have low vision a dynamic, real-time understanding of their social environment through spatial audio. For example, whenever the user passes their gaze over another person, they hear the person’s name or, if the person isn’t identified, a spatialized sound indicating their presence. The prototype demonstrates a new class of AI experience that goes beyond short tasks.
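
As a rough illustration of that gaze-triggered behavior, the sketch below shows the branching between a spoken name and a spatialized tone. DetectedPerson, the stub audio functions, and all parameters are invented for illustration; they are not PeopleLens code, and a real system would replace the print stubs with recognition and spatial-audio backends:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DetectedPerson:
    name: Optional[str]   # None when the system can't identify the person
    azimuth_deg: float    # direction relative to the wearer's head
    distance_m: float

def play_spatialized_speech(text: str, azimuth_deg: float, distance_m: float) -> None:
    # Stand-in for a spatial-audio engine speaking from a given direction.
    print(f"[speech @ {azimuth_deg:+.0f} deg, {distance_m:.1f} m] {text}")

def play_spatialized_tone(azimuth_deg: float, distance_m: float) -> None:
    # Stand-in for a non-speech cue played from a given direction.
    print(f"[tone @ {azimuth_deg:+.0f} deg, {distance_m:.1f} m]")

def on_gaze_over(person: DetectedPerson) -> None:
    """Called each time the wearer's gaze passes over a detected person."""
    if person.name is not None:
        # Identified: speak the name from the person's direction.
        play_spatialized_speech(person.name, person.azimuth_deg, person.distance_m)
    else:
        # Unidentified: a spatialized tone still conveys that someone
        # is present and roughly where they are.
        play_spatialized_tone(person.azimuth_deg, person.distance_m)

on_gaze_over(DetectedPerson(name="Theo", azimuth_deg=-30.0, distance_m=1.5))
on_gaze_over(DetectedPerson(name=None, azimuth_deg=45.0, distance_m=2.0))
```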

The inspiration: AI has tremendous potential to close the gap between what people can do and what their environment allows them to do, and to level the playing field for people who experience the world in different ways. Research powered by deep engagement with communities of varying abilities and their advocates has much to show about effectively closing that gap and building AI tools that empower.

Areas of impact: The researchers’ recent focus has been on enabling children who are born blind to develop social agency. Many children who are blind struggle to interact socially, as it’s a skill learned incidentally through vision. For example, if a child who is blind is unsure where their conversation partner is, they may have difficulty aiming their voice appropriately, or they may position their body in a way that conveys disengagement. Delivering social information through spatial audio gives children an opportunity to find their friends and build the skills to direct their attention to others and others’ attention to themselves—the key to social agency.

Key takeaway: When closing the gap between people with different sensory experiences, technologists must consider not only the user but also the people interacting with the user. For example, it’s equally important for a user to “see” a communication partner as it is for the partner to know they’ve been “seen.” Researchers can engage more with how technology can create common ground and spread the responsibility of closing the gap among those interacting.

User-experience prototyping: The researchers observed a substantial difference between imagined and actual use of the PeopleLens. It’s hard for people to imagine their lives with a new set of information; testing a prototype and iterating can productively engage the imagination of what’s possible. For example, the study participant initially wanted all the information his peers who weren’t blind had. He became overwhelmed and realized he instead needed information that helped him focus on the interactions he wanted to have. Differences in imagined and actual use also suggest that open-ended AI systems can empower users to extend their capabilities in ways technologists can’t necessarily imagine.

The research team: Alex Taylor of City, University of London; Cecily Morrison, Edward Cutrell, Martin Grayson, Anja Thieme, Camilla Longden, Rita Faia Marques, and Abigail Sellen of Microsoft Research; former Microsoft researcher Sebastian Tschiatschek; and Geert Roumen, who was a Microsoft Research intern at the time of the work.

Leveraging the knowledge of experienced users to build more accessible smartphones

In short: Informing the future of smartphone accessibility via the usage patterns of expert users who are blind. 

A deeper dive: In “Smartphone Usage by Expert Blind Users,” researchers recruited individuals who are blind and have extensive experience using Android smartphones and the platform’s TalkBack screen reader to inform the design of more accessible devices. Based on phone usage logged over the course of a month and semi-structured interviews, the researchers found that the group sought the same interaction speed achieved by phone users who aren’t blind and tailored their behavior to that end. Through the examination of a variety of behaviors, from app usage to phone locking to battery charging, and feedback on security concerns and learning to use TalkBack, the researchers discovered new engagement styles. These included combining TalkBack’s “explore by touch” with directional gestures to navigate the phone faster, a preference for voice input and external keyboards for text entry, and the selection of different text-to-speech (TTS) engines depending on what participants were reading. Data collected with participants’ consent was anonymized before being uploaded to the cloud for analysis: contact information, keys pressed, the content of TalkBack speech, and any details related to content consumed were removed.
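
As a rough sketch of the kind of scrubbing step the paper describes, the Python below drops sensitive fields from a logged event before upload. The field names are hypothetical stand-ins for whatever the study’s logger actually recorded:

```python
# Fields to redact, modeled on the categories the paper says were removed.
SENSITIVE_FIELDS = {
    "contact_info",       # names/numbers of contacts
    "keys_pressed",       # raw keystrokes
    "talkback_speech",    # text spoken by the screen reader
    "content_details",    # anything describing consumed content
}

def anonymize_event(event: dict) -> dict:
    """Return a copy of a usage-log event with sensitive fields dropped."""
    return {k: v for k, v in event.items() if k not in SENSITIVE_FIELDS}

raw = {
    "timestamp": "2020-06-01T10:15:00Z",
    "app": "com.android.chrome",
    "gesture": "swipe_right",
    "talkback_speech": "Inbox, 3 new messages",
}
print(anonymize_event(raw))
# {'timestamp': '2020-06-01T10:15:00Z', 'app': 'com.android.chrome',
#  'gesture': 'swipe_right'}
```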

The inspiration: The researchers were developing a tool to help novice users—mostly young children—learn TalkBack faster and more intuitively. While developing the tool, they realized the details of how people who are blind interact with their smartphones to accomplish what they want were unclear. That lack of knowledge made it difficult to answer a series of important questions: What should the initial set of content taught to novice users be? Which TalkBack gestures are important? How do you use TTS effectively or best enter text? Before continuing with their work, they decided to first understand expert users’ phone usage patterns, as expert users tend to create efficient workarounds to gain maximum benefit from a system and overcome its limitations.

Areas of impact: The insights from this work can facilitate the design of smartphones that better meet the real-world needs of people who are blind. For example, from a software perspective, optimizing applications specifically for use with screen readers can begin to deliver a faster, more efficient phone experience. On the hardware front, standardization of specifications like touch screen sensitivity across devices could help ensure gesture styles learned and used on one phone transfer over to another. The work also supports the continued use of the tactile fingerprint scanner as an effective and trusted way for those who are blind to secure their devices. Additionally, users could benefit from an app to assist them in mastering TalkBack, perhaps a game-style tool in which new gestures and shortcuts could be introduced once a user reaches a certain skill level in using the screen reader.
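
A tutor app like the one suggested could gate new material on demonstrated skill. The sketch below is a hypothetical progression mechanic, not a description of any existing tool; the curriculum entries and threshold values are invented for illustration:

```python
# Hypothetical progression for a game-style TalkBack tutor: lessons
# unlock as the user's demonstrated skill level increases.
CURRICULUM = [
    (0, "explore by touch"),
    (1, "swipe right/left to move between items"),
    (2, "double-tap to activate"),
    (3, "directional gesture shortcuts"),
    (4, "switching text-to-speech engines"),
]

def unlocked_lessons(skill_level: int) -> list:
    """Return the lessons available at the user's current skill level."""
    return [name for required, name in CURRICULUM if required <= skill_level]

print(unlocked_lessons(2))
# ['explore by touch', 'swipe right/left to move between items',
#  'double-tap to activate']
```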

Key takeaway: Expert users are an invaluable resource for understanding the real-world potential and limitations of a technology and ways to improve it. What counts as “expertise” varies by skill, but it’s useful for recruited experts to have sustained experience over a long duration; for TalkBack, the researchers treated five or more years of experience as expert.

The research team: Mohit Jain and Manohar Swaminathan of Microsoft Research India and former Microsoft Research contractor Nirmalendu Diwakar.

Understanding the educational value of digital games in schools for the blind—a teachers’ perspective

In short: This work aims to discover which attributes of digital games teachers find useful and why, and it explores the perceived challenges of integrating such games into schools for children who are blind in India.

A deeper dive: In “Teachers’ Perceptions around Digital Games for Children in Low-resource Schools for the Blind,” researchers partnered with Vision Empower, a nongovernmental organization in India that specializes in accessible resources for children who are blind, and together worked with seven schools for children who are blind in Karnataka, India. The team gauged the workload and skill sets of computer teachers via surveys, then invited the teachers to engage with existing accessible digital games during an informal and playful workshop designed to encourage them to think freely and creatively about the use of these types of games in their classrooms. Semi-structured interviews conducted afterward found teachers to be excited about these games’ potential to increase student engagement and support learning.

To understand how digital games can be used to enhance the classroom and learning experience for children who are blind or have low vision in India, researchers hosted a workshop for teachers at the Microsoft Research India lab. The informal nature of the event aimed to support teachers in exploring the games—and their potential use in classrooms—without limitations.

The inspiration: With STEM curricula in India’s early education system limited to students who aren’t blind or don’t have low vision, the country’s nearly 2 million children who are blind or have low vision are at high risk of being left out of the digital world and its myriad opportunities as they continue their education and choose career paths. This work takes the first steps toward introducing computational thinking and digital skills to them at a young age.

Areas of impact: The teachers’ positive perceptions of these digital games have the potential to encourage school administrators in India to emphasize the introduction of digital skills to children who are blind or have low vision in early grade levels and with a play-based approach. The work also aims to advance research in understanding and enhancing the learning environments of children with disabilities in low-resource environments.

Key takeaway: This research highlights the need to work with key stakeholders—in this case, teachers—to understand their perceptions about the introduction of new technologies.

Play is fundamental: This work is part of the application of a new methodology called Ludic Design for Accessibility, developed at Microsoft Research India, which calls for technology design and application motivated by play and playfulness. The focus on digital games is meant to enable the learning of multiple skills through play. A key design element in developing these games is the autonomy of players—children and teachers—to define all aspects of the play and the experience. Learn more about Ludic Design for Accessibility, the partners involved, and other projects leveraging the approach.

The research team: Mohit Jain and Manohar Swaminathan of Microsoft Research India; Microsoft Research Fellow Gesu India; former Microsoft Research contractor Nirmalendu Diwakar; Vidhya Y and Aishwarya O of Vision Empower, Bangalore, India; and Aditya Vashistha of Cornell University.

More to explore

Locomotion inside virtual reality

As new and existing technologies progress and become a more regular part of our lives, new challenges and opportunities around accessibility and inclusion will present themselves. Virtual reality is a great example. Gains in software and hardware that lead to more immersive experiences only make the technology more attractive for use in professional and social scenarios. How can VR be designed to accommodate the variety of capabilities represented by those who want to use it?

At CHI, researchers presented Locomotion Vault, an interactive database and analysis of VR locomotion techniques (LTs) used across industry and academia. Accessibility—defined by the authors as the “extent of motor ability that the LT requires”—was among the attributes considered in the analysis, and the library of methods itself allows developers to include different locomotion options for users to choose from, making the VR experiences they create more accessible to different people, says Principal Researcher Mar Gonzalez Franco. To use the database or suggest a new locomotion method, visit the Locomotion Vault GitHub repo. Locomotion Vault is a collaborative effort between Microsoft Research and researchers from the University of Copenhagen and the University of Birmingham.
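
As a rough idea of the kind of query such a database enables, the sketch below filters techniques by required motor ability. The schema, technique entries, and attribute values are invented for illustration and are not Locomotion Vault’s actual structure:

```python
# Hypothetical records in the style of a locomotion-technique database.
locomotion_techniques = [
    {"name": "Teleportation", "motor_ability_required": "low"},
    {"name": "Arm swinging",  "motor_ability_required": "high"},
    {"name": "Joystick",      "motor_ability_required": "low"},
]

def accessible_lts(techniques: list, max_motor_demand: str = "low") -> list:
    """Return LTs whose motor-ability demand is within the given bound."""
    order = {"low": 0, "medium": 1, "high": 2}
    bound = order[max_motor_demand]
    return [t for t in techniques
            if order[t["motor_ability_required"]] <= bound]

for lt in accessible_lts(locomotion_techniques):
    print(lt["name"])  # Teleportation, Joystick
```

Offering several options that pass such a filter lets each user pick the technique that matches their own motor abilities.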

Capturing the remote work experience

Outside of CHI but very relevant is a pair of papers studying remote work as experienced by people with disabilities. In “Understanding the Telework Experience of People with Disabilities,” Principal Researcher John Tang interviewed people with a range of abilities who regularly work remotely. Participants described productivity challenges and opportunities around the use of telework tools and their sudden transition to interacting with many more colleagues also working remotely because of the pandemic. The second paper focuses on the impact remote work during the pandemic has had on professionals who are neurodivergent, including people who have autism spectrum disorder, attention deficit hyperactivity disorder (ADHD), and dyslexia. Both papers identify design implications for making remote work technologies accessible to people of all abilities.

The scope of capabilities in the world is expansive, and taking a more inclusive approach and broadening the idea of ability can open doors to some truly innovative advancements.

“They all require different kinds of creative ways of thinking about how we could use what people have and what their capabilities are now to more effectively allow them to do what they want to do and to live in the world,” says Senior Principal Research Manager Ed Cutrell.

