Pattern 1B: Use explanation (G11) patterns
Problem: The user needs to understand what the system can do.
Solution: Provide explanations that enable users to gain insights into system capabilities. Explanations help user understanding because they expose relationships between system inputs and outputs (see G11 patterns).
Use when / How: Use Guideline 11 patterns to explain why the system did what it did. […]
Pattern 2C: Report system performance information
Problem: The user needs to form accurate expectations about how well the system can do what it can do.
Solution: Provide grounded information about how well the system can do what it can do.
Use when / How: Collaborate with an AI/ML practitioner to collect information about: […] Performance information may cover overall system performance as well […]
Pattern 1C: Expose system controls
Problem: The user needs to understand what the system can do.
Solution: Expose system capabilities through system controls.
Use when / How: Use UI controls, options, menus, and settings to make the user aware of system capabilities. Use discoverability techniques that enable users to explore the interface and find system capabilities.
User benefits: Learn by doing: […]
Pattern 2A: Match the level of precision in UI communication with the system performance – Language
Problem: The user needs to form realistic expectations about how well the system can do what it can do.
Solution: Communicate that the system is probabilistic and may make mistakes through intentional use of uncertainty in language.
Use when / How: For system outputs and/or behaviors that are best qualified with language, match the words’ precision […]
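To illustrate matching linguistic precision to system performance, here is a minimal sketch of mapping a model confidence score to hedged wording. The thresholds and phrases are illustrative assumptions, not part of the pattern; they would need to be calibrated against the system's measured performance.

```python
def hedge(confidence: float) -> str:
    """Map a model confidence score in [0.0, 1.0] to hedged UI language.

    Thresholds and phrasing below are hypothetical examples; calibrate
    them so the words' precision matches actual system accuracy.
    """
    if confidence >= 0.9:
        return "Flight due to arrive on time"
    if confidence >= 0.6:
        return "Flight likely to arrive on time"
    return "Flight may arrive on time"
```

At high confidence the copy states the prediction plainly; as confidence drops, the wording shifts toward "likely" and "may" so users do not over-trust uncertain output.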
Support efficient dismissal
Make it easy to dismiss or ignore undesired AI system services.
Scope services when in doubt
Engage in disambiguation or gracefully degrade the AI system’s services when uncertain about a user’s goals.
Remember recent interactions
Maintain short-term memory and allow the user to make efficient references to that memory.
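One lightweight way to maintain such short-term memory is a bounded buffer of recent interactions that the interface can query when the user refers back to them. The class and method names below are illustrative assumptions, not an API from the guidelines:

```python
from collections import deque


class RecentInteractions:
    """Bounded short-term memory of a user's recent interactions.

    Illustrative sketch: a production system would also store
    structured context (entities, selections) so references like
    "the first one" can be resolved.
    """

    def __init__(self, capacity: int = 5):
        # deque with maxlen silently discards the oldest entries
        self._items = deque(maxlen=capacity)

    def record(self, interaction: str) -> None:
        self._items.append(interaction)

    def recall(self, n: int = 1) -> list[str]:
        """Return the n most recent interactions, newest first."""
        return list(self._items)[-n:][::-1]
```

Bounding the buffer keeps the memory "short-term" by design: old interactions age out automatically instead of accumulating indefinitely.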
Show contextually relevant information
Display information relevant to the user’s current task and environment.
Make clear how well the system can do what it can do
Help the user understand how often the AI system may make mistakes.
Learn from user behavior
Personalize the user’s experience by learning from their actions over time.
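A common lightweight form of learning from user behavior is frequency-based ranking, where options the user chooses often rise to the top over time. This is a sketch under that assumption; the names are hypothetical, and a real system would add recency decay and cold-start defaults:

```python
from collections import Counter


class UsagePersonalizer:
    """Reorder options by how often the user has chosen them.

    Illustrative sketch of learning from behavior over time;
    not an API defined by the guidelines.
    """

    def __init__(self):
        self._counts = Counter()  # unseen options count as 0

    def record_choice(self, option: str) -> None:
        self._counts[option] += 1

    def rank(self, options: list[str]) -> list[str]:
        # Stable sort: ties keep their original (default) order.
        return sorted(options, key=lambda o: -self._counts[o])
```

Because the sort is stable, options the user has never chosen keep their designed default order rather than being shuffled arbitrarily.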
Convey the consequences of user actions
Immediately update or convey how user actions will impact future behaviors of the AI system.
Support efficient correction
Make it easy to edit, refine, or recover when the AI system is wrong.