Screen Reader Programmers in the Vibe Coding Era: Adaptation, Empowerment, and New Accessibility Landscape

CHI 2026

Generative AI agents are reshaping human-computer interaction, shifting users from directly executing tasks to supervising machine-driven actions, as exemplified by the rise of "vibe coding" in programming. Yet little is known about how screen reader programmers interact with AI code assistants in practice. We conducted a longitudinal study with 16 blind and low-vision programmers. Participants completed a GitHub Copilot tutorial, engaged with a programming task, and provided initial feedback. After two weeks of AI-assisted programming, follow-up interviews examined how their practices and perceptions evolved. Our findings show that code assistants enhanced programming efficiency and bridged accessibility gaps. However, participants struggled to convey intent, interpret AI outputs, and manage multiple views while maintaining situational awareness. They showed diverse preferences for accessibility features, expressed a need to balance automation with control, and encountered barriers when learning to use these tools. Based on these findings, we propose design principles and recommendations for more accessible and inclusive human-AI collaboration.