Pattern 9B: Rich and detailed edits

Problem: The AI system produced an incorrect or partially incorrect result and the user needs to edit, correct, refine, or recover the system’s behavior.
Solution: Enable the user to modify the AI system’s output by editing, correcting, or refining it. Enable the user to edit all parts of the AI system’s output.
Use when / How: […]

Pattern 9A: Switch classification decisions

Problem: The AI system incorrectly classified an object and the user needs to edit, correct, refine, or recover the system’s behavior.
Solution: Enable users to correct the AI system by selecting between two different states for each system output.
Use when / How: The AI classifies each item (e.g., important/not important; spam/not spam). Enable the user to correct […]
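A minimal sketch of Pattern 9A, assuming a binary spam classifier whose per-item decision the user can flip between the two states. All class and field names here are illustrative, not part of the pattern:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ClassifiedItem:
    """One item the AI has labeled, plus an optional user override."""
    text: str
    model_label: str                  # e.g. "spam" or "not spam"
    user_label: Optional[str] = None  # set when the user corrects the system

    @property
    def effective_label(self) -> str:
        # The user's correction always wins over the model's guess.
        return self.user_label if self.user_label is not None else self.model_label

    def toggle(self) -> None:
        # Switch the item between the two possible states with one action.
        self.user_label = "not spam" if self.effective_label == "spam" else "spam"

item = ClassifiedItem("WIN A FREE PRIZE!!!", model_label="spam")
item.toggle()                 # the user flips the decision
print(item.effective_label)   # -> not spam
```

Keeping the model's label and the user's override as separate fields means corrections can also be logged and fed back for retraining.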

Pattern 2D: Provide low performance alerts

Problem: The user needs to form realistic expectations about how well the system can do what it can do.
Solution: Alert the user to known or anticipated issues with system performance.
Use when / How: Distill the most important information from G02-C: Report system performance information about the most probable low performance conditions. For example, identify: Collaborate with […]

Pattern 1E: Show a set of system outputs

Problem: The user needs to understand what the system can do.
Solution: Show a set of system outputs for the user to choose from.
Use when / How: Show a preview of the most probable system outputs, based on the current state and input. Select possible system outputs to display based on one or more considerations: […]

Pattern 1D: Demonstrate possible system inputs

Problem: The user needs to understand what the system can do.
Solution: Show possible user inputs to demonstrate to the user what the system can do.
Use when / How: Show possible user inputs in one or more of the following forms: Select possible user inputs to display based on one or more considerations: User benefits Common […]

Pattern 2B: Match the level of precision in UI communication with the system performance – Numbers

Problem: The user needs to form realistic expectations about how well the system can do what it can do.
Solution: Communicate that the system is probabilistic and may make mistakes through intentional use of precision in numeric measurements.
Use when / How: For system outputs and/or behaviors that are qualified numerically, match precision of numbers used […]
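One way to apply Pattern 2B is to round a displayed confidence so its implied precision never exceeds what the system's measured performance can justify. The sketch below is illustrative; the thresholds and the `calibration_error` parameter are assumptions, not part of the pattern:

```python
def display_confidence(p: float, calibration_error: float) -> str:
    """Round a probability so its precision matches measured performance.

    If calibration error is ~0.08, showing "87.3%" overstates precision;
    "around 90%" is more honest. Band boundaries here are illustrative.
    """
    pct = p * 100
    if calibration_error >= 0.10:
        # Very coarse performance: only say which side of 50% we are on.
        return "likely" if p >= 0.5 else "unlikely"
    if calibration_error >= 0.05:
        # Moderate performance: round to the nearest 10 percentage points.
        return f"around {round(pct / 10) * 10:.0f}%"
    # Well-calibrated system: whole percentage points are defensible.
    return f"{pct:.0f}%"

print(display_confidence(0.873, 0.08))  # -> around 90%
print(display_confidence(0.873, 0.01))  # -> 87%
```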

Pattern 1B: Use explanation (G11) patterns

Problem: The user needs to understand what the system can do.
Solution: Provide explanations that enable users to gain insights into system capabilities. Explanations help user understanding because they expose relationships between system inputs and outputs (see G11 patterns).
Use when / How: Use Guideline 11 patterns to explain why the system did what it did. […]

Pattern 2C: Report system performance information

Problem: The user needs to form accurate expectations about how well the system can do what it can do.
Solution: Provide grounded information about how well the system can do what it can do.
Use when / How: Collaborate with an AI/ML practitioner to collect information about: Performance information may cover overall system performance as well […]

Pattern 1C: Expose system controls

Problem: The user needs to understand what the system can do.
Solution: Expose system capabilities through system controls.
Use when / How: Use UI controls, options, menus, and settings to make the user aware of system capabilities. Use discoverability techniques that enable users to explore the interface and find system capabilities. User benefits Learn by doing: […]

Pattern 2A: Match the level of precision in UI communication with the system performance – Language

Problem: The user needs to form realistic expectations about how well the system can do what it can do.
Solution: Communicate that the system is probabilistic and may make mistakes through intentional use of uncertainty in language.
Use when / How: For system outputs and/or behaviors that are best qualified with language, match the words’ precision […]
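A minimal sketch of Pattern 2A: pick a verb phrase whose certainty matches the model's probability, rather than always asserting the output flatly. The band boundaries and phrases below are illustrative assumptions to be tuned against measured system performance:

```python
# Hypothetical mapping from model probability to hedged language.
BANDS = [
    (0.95, "will"),
    (0.75, "will probably"),
    (0.50, "may"),
    (0.25, "probably won't"),
    (0.00, "won't"),
]

def hedge(p: float) -> str:
    """Return a verb phrase whose implied certainty matches probability p."""
    for threshold, phrase in BANDS:
        if p >= threshold:
            return phrase
    return BANDS[-1][1]  # fallback for pathological inputs (p < 0)

print(f"This flight {hedge(0.6)} be delayed.")  # -> This flight may be delayed.
```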

Support efficient dismissal

Make it easy to dismiss or ignore undesired AI system services.
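One sketch of efficient dismissal, under the assumption that a dismissed suggestion is suppressed for the rest of the session instead of reappearing. All names here are illustrative:

```python
class Suggestion:
    """An AI-offered service the user can dismiss with a single action."""
    def __init__(self, suggestion_id: str, text: str):
        self.id = suggestion_id
        self.text = text

class SuggestionFeed:
    def __init__(self):
        self._dismissed: set = set()

    def dismiss(self, suggestion: Suggestion) -> None:
        # One cheap gesture: no confirmation dialog, no nagging.
        self._dismissed.add(suggestion.id)

    def visible(self, suggestions: list) -> list:
        # Dismissed suggestions stay hidden instead of resurfacing.
        return [s for s in suggestions if s.id not in self._dismissed]

feed = SuggestionFeed()
tip = Suggestion("autoreply-1", "Reply with: 'Sounds good!'")
feed.dismiss(tip)
print([s.text for s in feed.visible([tip])])  # -> []
```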

Scope services when in doubt

Engage in disambiguation or gracefully degrade the AI system’s services when uncertain about a user’s goals.
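The guidance above can be sketched as a confidence-gated response: act only on a high-confidence guess at the user's goal, ask a disambiguating question when uncertain, and fall back to a generic, scoped-down service when lost. The thresholds and wording below are illustrative assumptions:

```python
# Hypothetical thresholds, tuned per system from measured intent accuracy.
HIGH, LOW = 0.8, 0.4

def respond(intent: str, confidence: float) -> str:
    if confidence >= HIGH:
        # Confident enough to act on the inferred goal directly.
        return f"Executing: {intent}"
    if confidence >= LOW:
        # Disambiguate: confirm the guessed goal instead of acting on it.
        return f"Did you mean: {intent}?"
    # Gracefully degrade: offer a generic service rather than a wrong one.
    return "Here are some general options."

print(respond("book a flight", 0.92))  # -> Executing: book a flight
print(respond("book a flight", 0.55))  # -> Did you mean: book a flight?
print(respond("book a flight", 0.10))  # -> Here are some general options.
```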