Principles of Mixed-Initiative User Interfaces

Proceedings of CHI '99, ACM SIGCHI Conference on Human Factors in Computing Systems, Pittsburgh, PA, ACM Press.

Recent debate has centered on the relative promise of focusing user-interface research on developing new metaphors and tools that enhance users’ abilities to directly manipulate objects versus directing effort toward developing interface agents that provide automation. In this paper, we review principles that show promise for allowing engineers to enhance human-computer interaction through an elegant coupling of automated services with direct manipulation. Key ideas will be highlighted in terms of the Lookout system for scheduling and meeting management.
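
A central principle of the paper is that an automated service should be invoked only when the expected utility of acting, given the system's uncertainty about the user's goal, exceeds that of inaction. A minimal worked version of that comparison follows; the notation (A for the automated action, G for the user's goal, p for the inferred goal probability, u for outcome utilities) is assumed here rather than quoted from the paper.

```latex
% Sketch with assumed notation: A = take the automated action,
% G = the user has the inferred goal, p = P(G | evidence),
% u(.,.) = utility of each action/goal outcome.
% Acting is warranted when its expected utility beats inaction:
\[
p\,u(A,G) + (1-p)\,u(A,\neg G) \;>\; p\,u(\neg A,G) + (1-p)\,u(\neg A,\neg G)
\]
% which rearranges to a threshold probability p* above which action is warranted:
\[
p \;>\; p^{*} \;=\;
\frac{u(\neg A,\neg G) - u(A,\neg G)}
     {\bigl[u(A,G) - u(\neg A,G)\bigr] + \bigl[u(\neg A,\neg G) - u(A,\neg G)\bigr]}
\]
```

The numerator is the cost of acting when the goal is absent; the denominator adds to that the benefit of acting when the goal is present, so a more costly false positive pushes the action threshold higher.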

Information Agents: Directions and Futures (2001)

In this internal Microsoft video, produced in 2001 and released publicly in 2020, research scientist Eric Horvitz provides glimpses of a set of research systems developed within Microsoft’s research division between 1998 and 2001. Projects featured in the video include Priorities, Lookout, Notification Platform, DeepListener, and Bestcom. The projects show early uses of machine learning, perception, and reasoning aimed at supporting people in daily tasks and at making progress on longer-term missions of augmenting human intellect. The efforts are thematically related in their pursuit of a broader understanding of people and context, including a person’s attention, goals, activities, and location, inferred from multimodal signals and the analysis of multiple streams of information. Several of the prototype systems were built within the Attentional User Interface (AUI) project, which…

Lookout System: Video Demonstration (1998)

The Lookout project was an early exploration of the promise of machine learning, mixed-initiative interaction, and multimodal interaction as a foundation for intelligent services. In this fast-paced 1998 demo video, project lead Eric Horvitz demonstrates several capabilities of the system, including Lookout’s ability to recognize intentions, to decide when to engage in dialog, to identify the best time to intervene, and to take actions to help with daily activities. The system brought together machine learning, decision making under uncertainty, speech recognition, natural language processing, dialog, and the use of animations to relay gestures, such as the communication of system confusion. Lookout learned to recognize the intentions of a user (e.g., to schedule an appointment based on the message at the focus of attention) via “streaming supervision”…
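
The decision machinery described here can be made concrete. Below is a minimal, hypothetical sketch of how a Lookout-style service might choose among doing nothing, opening a clarifying dialog, and acting autonomously, given an inferred probability of the user’s scheduling intention. The function names and utility values are illustrative assumptions, not Lookout’s actual implementation.

```python
# Hypothetical sketch of the mixed-initiative choice a Lookout-style system faces:
# given an inferred probability that the user intends to schedule, pick among
# staying idle, engaging in a clarifying dialog, or acting autonomously.
from dataclasses import dataclass

@dataclass
class Utilities:
    """Assumed utility of each (option, goal-present?) outcome, arbitrary scale."""
    act_goal: float = 1.0      # autonomous action when the user wanted it
    act_no_goal: float = -0.8  # unwanted action (intrusive, must be undone)
    ask_goal: float = 0.7      # dialog when the user wanted help (slight overhead)
    ask_no_goal: float = -0.2  # dialog when the user did not (minor interruption)
    idle_goal: float = 0.0     # missed opportunity to help
    idle_no_goal: float = 0.0  # correctly staying quiet

def expected_utility(p_goal: float, u_goal: float, u_no_goal: float) -> float:
    """Expected utility of an option under uncertainty about the user's goal."""
    return p_goal * u_goal + (1.0 - p_goal) * u_no_goal

def choose_action(p_goal: float, u: Utilities = Utilities()) -> str:
    """Pick the option with the highest expected utility."""
    options = {
        "act":    expected_utility(p_goal, u.act_goal, u.act_no_goal),
        "dialog": expected_utility(p_goal, u.ask_goal, u.ask_no_goal),
        "idle":   expected_utility(p_goal, u.idle_goal, u.idle_no_goal),
    }
    return max(options, key=options.get)

if __name__ == "__main__":
    for p in (0.1, 0.5, 0.9):  # e.g., a classifier's belief the message implies scheduling
        print(f"P(goal)={p:.1f} -> {choose_action(p)}")
```

With these assumed utilities, low-probability inferences yield no action, intermediate ones trigger a clarifying dialog, and only high-probability ones warrant autonomous action, mirroring the thresholded expected-utility analysis of the CHI ’99 paper.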