Weaving Sound Information to Support Deaf and Hard of Hearing People’s Real-time Sensemaking of Auditory Environments: Co-designing with a DHH User
- Jeremy Zhengqi Huang,
- Jaylin Herskovitz,
- Liang-Yuan Wu,
- Cecily Morrison,
- Dhruv Jain
2025 CHI Conference on Human Factors in Computing Systems
Current AI sound awareness systems can provide deaf and hard of hearing (DHH) people with information about sounds, including discrete sound sources and speech transcriptions. However, synthesizing AI outputs based on DHH people’s ever-changing intents, so as to facilitate their sensemaking of complex auditory environments, remains a challenge. In this paper, we describe the co-design process of SoundWeaver, a sound awareness system prototype that dynamically weaves outputs from different AI models based on users’ intents and presents the synthesized information through a heads-up display. Adopting a Research through Design perspective, we created SoundWeaver with one DHH co-designer, adapting it to his personal contexts and goals (e.g., cooking at home and chatting in a game store). Through this process, we present design implications for the future of “intent-driven” AI systems for sound accessibility.