The tech industry is being called upon to ensure our AI technologies are responsibly developed and deployed. Yet many organizations that create AI technologies report being unprepared to address AI risks and failures.
To meet these challenges, Microsoft is striving to take a human-centered approach to AI, designing and building technologies that benefit people and society while mitigating potential harms. This includes understanding human needs and using that insight to drive development decisions from beginning to end.
The HAX team brings together leaders in human-computer interaction, AI, and software engineering to address the socio-technical and human-factors challenges involved in empowering the creation of responsible human-AI experiences.
Break it down for me please…
- By “responsible” we mean adhering to Microsoft’s responsible AI principles including safety, reliability, fairness, accountability, and transparency.
- By “empower” we mean helping AI creators (including designers, user researchers, project managers, data scientists and model builders, and engineers), interdisciplinary teams, and organizations through new practices and tools.
- By “creation” we mean enabling and/or accelerating the everyday work needed to build responsible AI-driven technologies.
- By “human-AI experiences” we mean focusing on how people and society will ultimately use or interact with AI in their everyday lives. Creating responsible human-AI experiences requires technical and socio-technical advances that touch every aspect of AI-driven technologies including their interfaces, models, data, and underlying systems.
How does the team fit into the larger responsible AI ecosystem at Microsoft?
The HAX team is an outgrowth of the Aether Human-AI Interaction & Collaboration (HAIC) Working Group. Aether is Microsoft's advisory group on AI Ethics and Effects in Engineering and Research. In collaboration with researchers at MSR and practitioners across the company, the HAIC Working Group created the Guidelines for Human-AI Interaction and the HAX Toolkit, a set of practical tools for creating human-AI experiences that includes the guidelines.
Building on this earlier work, MSR's HAX team was created to scale up Microsoft's research efforts to advance the state of the art in responsible human-AI experiences and tooling. The HAX team complements and collaborates with related groups at MSR, including our sister group ASI (innovating in responsible AI algorithms, capabilities, and tooling) and the FATE group (advancing our understanding of AI's impact on society, with a special focus on tools and technologies for fairness, accountability, and transparency). We also continue to contribute to Microsoft's responsible AI initiative along with our partners in Aether, the Office of Responsible AI, and colleagues across Microsoft's company-wide responsible AI ecosystem.