Emotion and Personality in a Conversational Character
- J. Eugene Ball
- Jack Breese
This paper describes an architecture for constructing a character-based user interface that uses speech recognition and speech generation. The architecture employs models of emotion and personality, encoded as Bayesian networks, to (1) diagnose the emotions and personality of the user, and (2) generate appropriate behavior by an automated agent in response to the user's interaction. The classes of interaction that can be interpreted and/or generated include (a) word choice and syntactic framing of utterances, (b) speech pace, rhythm, and pitch contour, and (c) gesture, expression, and body language. We also introduce the closely related problem of diagnosing the user's task orientation, i.e., the degree to which the user is focused on completing a particular task or set of tasks. This diagnosis is needed to appropriately control system-initiated interactions that could otherwise be distracting or inefficient.
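The diagnostic direction of such an architecture can be illustrated with a minimal sketch: a hidden emotion node with observable cue nodes for word choice and speech pace, inverted by Bayes' rule, followed by a simple mapping from the diagnosed state to an agent response style. All node names, states, and probabilities below are illustrative assumptions, not the networks reported in the paper.

```python
# Minimal sketch (not from the paper): Bayesian diagnosis of a user's
# emotional valence from two observable cues, then selection of an agent
# behavior. Structure and all probabilities are illustrative assumptions.

# Prior over the hidden emotion node: P(valence)
P_VALENCE = {"positive": 0.5, "negative": 0.5}

# Conditional probability tables for the cue nodes, P(cue | valence)
P_WORDS = {  # P(word_choice | valence)
    "positive": {"upbeat": 0.7, "terse": 0.3},
    "negative": {"upbeat": 0.2, "terse": 0.8},
}
P_PACE = {  # P(speech_pace | valence)
    "positive": {"fast": 0.6, "slow": 0.4},
    "negative": {"fast": 0.3, "slow": 0.7},
}


def diagnose(word_choice: str, speech_pace: str) -> dict:
    """Posterior P(valence | cues) by Bayes' rule, assuming the cues are
    conditionally independent given the emotion node."""
    joint = {
        v: P_VALENCE[v] * P_WORDS[v][word_choice] * P_PACE[v][speech_pace]
        for v in P_VALENCE
    }
    z = sum(joint.values())
    return {v: p / z for v, p in joint.items()}


def choose_agent_behavior(posterior: dict) -> str:
    """Map the diagnosed emotion to a response style; in the paper this
    generative direction is also driven by a Bayesian network."""
    if posterior["negative"] > 0.5:
        return "sympathetic, slower-paced reply"
    return "enthusiastic, faster-paced reply"


if __name__ == "__main__":
    post = diagnose(word_choice="terse", speech_pace="slow")
    print(post)  # {'positive': ~0.18, 'negative': ~0.82}
    print(choose_agent_behavior(post))
```

With terse wording and slow pace observed, the posterior shifts toward negative valence (about 0.82 under these made-up tables), and the agent adapts its behavior accordingly; the paper's networks extend this pattern to personality dimensions and richer cue sets such as gesture and pitch contour.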