How to better design AI – from ideation to user perception and acceptance
Designing artificial intelligence systems and features poses new challenges for user experience (UX) practitioners. Traditionally, UX designers rely on sketching and low-fidelity, fast prototyping to imagine and test their ideas. The usual design tools and techniques, however, can't always meet the demands of designing AI-infused systems. For one, these systems are increasingly based on more abstract interactions, like language understanding and conversation participation. They're also intrinsically dynamic and unpredictable, changing over time as they learn and adapt to users, their tasks, and their contexts. For these reasons, AI systems often violate well-known usability principles, making updated guidance for designing human-AI interaction necessary. Additionally, designing for AI systems raises questions about which interaction design patterns work best and how they affect user perception.
Microsoft Research is presenting three papers at the ACM CHI Conference on Human Factors in Computing Systems (CHI) to address these needs and empower UX practitioners to design for AI.
Natural language processing—sketching the abstract
The paper “Sketching NLP: A Case Study of Exploring the Right Things to Design with Language Intelligence” gets to the heart of a fundamental design thinking method: sketching. Designers use various sketching techniques, such as storyboards, wireframes, and paper prototypes, to clarify problem spaces and rapidly explore tentative design alternatives. However, many of these standard sketching tools are based on tangible interactions like swiping or pointing and clicking, not more abstract ones. So imagine the difficulties that arise when, let’s say, you and your team are planning to use natural language processing (NLP) techniques to add intelligent writing assistance to a document-authoring application.
The paper identifies several of these challenges, including how to sketch language interactions abstractly so that authors can better understand the possible outcomes, and how to understand and stretch NLP's technical limits and, within those limits, envision novel NLP applications. It also introduces a new format of wireframe that the authors refer to as a notebook: a text editor augmented with features for sketching tentative NLP-based design concepts.
Through a candid, behind-the-scenes glimpse into the product design process of a team of human-computer interaction (HCI) researchers and NLP scientists, the paper shows that such a notebook can serve as a useful common ground for designers, NLP scientists, and potential users to communicate about proposed design concepts. The paper envisions a future in which NLP-specific design tools will be as common as wireframes and paper prototypes are today.
Guidelines for human-AI interaction
Besides tools for ideation, designers often rely on principles and heuristics to guide design decisions and evaluate existing solutions. The paper “Guidelines for Human-AI Interaction” proposes a foundation for design guidance specific to AI.
While guidance on designing for AI abounds, the community has lacked a unified, trustworthy set of guidelines for creating intuitive interactions between humans and AI systems. The authors synthesized more than two decades of research and thinking in this area into a set of guidelines that underwent three rigorous rounds of validation. The resulting 18 guidelines for human-AI interaction suggest how AI systems should behave upon initial interaction, during regular interaction, when they're inevitably wrong, and over time.
The guidelines can be used to evaluate ideas or systems following established interface inspection methods such as heuristic evaluation. They can also be used to ground ideation as teams imagine an AI-infused system’s capabilities. These guidelines are also intended as a basis for dialogue among the different disciplines involved in designing AI-infused systems, such as UX, engineering, data science, and management. The guidelines are available in a poster and printable cards—stop by the Microsoft booth during CHI to meet the authors and get a deck!
Shaping expectations of AI systems
Errors are inherent in AI-infused systems, yet most users don’t expect computers to behave inconsistently or imperfectly. The first two guidelines for human-AI interaction recommend setting expectations about an AI system’s capabilities and performance early on to prevent user frustration and dissatisfaction down the line. The paper “Will You Accept an Imperfect AI? Exploring Designs for Adjusting End-user Expectations of AI Systems” evaluates different techniques for adjusting user expectations to increase user acceptance.
The paper suggests three patterns for shaping expectations: stating an AI’s accuracy explicitly, explaining how the AI works, and allowing users to control the AI’s performance.
Through studies of these patterns implemented in an AI-infused scheduling assistant, the paper demonstrates their effectiveness in sustaining user acceptance and reveals differences in acceptance when the AI is tuned to avoid false positive errors versus when it is tuned to avoid false negative errors—even with the same overall accuracy. These results suggest the effort and resources required to address an error are a key factor in user acceptance of AI technology. The paper provides UX designers with evidence they can use to support decisions for how to set expectations about AI systems and specific patterns for doing so.
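To make the false positive versus false negative distinction concrete, here is a minimal illustrative sketch (not from the paper; the numbers are hypothetical) showing how two systems can have identical overall accuracy while making very different kinds of errors:

```python
# Illustrative sketch: two hypothetical classifiers, each evaluated on 100
# labeled items, with the same overall accuracy but opposite error profiles.

def error_profile(tp, fp, tn, fn):
    """Return (accuracy, false positive rate, false negative rate)."""
    total = tp + fp + tn + fn
    accuracy = (tp + tn) / total
    fpr = fp / (fp + tn)  # how often negatives are wrongly flagged
    fnr = fn / (fn + tp)  # how often positives are missed
    return accuracy, fpr, fnr

# Tuned to avoid false positives: its errors are mostly misses.
avoid_fp = error_profile(tp=40, fp=0, tn=50, fn=10)

# Tuned to avoid false negatives: its errors are mostly false alarms.
avoid_fn = error_profile(tp=50, fp=10, tn=40, fn=0)

print(avoid_fp)  # (0.9, 0.0, 0.2)
print(avoid_fn)  # (0.9, 0.2, 0.0)
```

Both systems are "90% accurate," yet a user's experience of them differs sharply, since recovering from a missed meeting request costs something different than dismissing a false alarm. This is the kind of tradeoff the paper's studies probe.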
These three papers seek to assist UX designers in creating AI-powered experiences that are useful, usable, and frustration-free, and they advance scholarly knowledge in areas that have direct implications for how to design AI. We look forward to sharing them at CHI and hope that, if you attend the conference in Glasgow, Scotland, you stop by the Microsoft booth to meet the authors!