About Design Expo

Be inspired by top design schools from around the world as they respond to the Design Expo 2016 Challenge. This year we are excited to again align with Faculty Summit and the 25th Anniversary of Microsoft Research.
2016 Design Challenge:
Achieving Symbiosis and the Conversational User Interface (CUI)
Watch the full video of Design Expo 2016, or view individual videos per school in the Participating Schools and Liaisons section.
The hope is that, in not too many years, human brains and computing machines will be coupled together very tightly, and that the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today.
— J.C.R. Licklider, Man-Computer Symbiosis, 1960
Move ahead an astonishing 55 years, and Licklider’s prerequisites (and more) for symbiosis now exist. A plethora of chatty bots offer to do things for us in our messaging experiences. A series of personal agent services have emerged that leverage machines, humans, or both to complete tasks for us (x.ai, Clara Labs, Fancy Hands, TaskRabbit, and Facebook “M,” to name a few), with email, text, or a voice call as the commanding interface. WeChat is perhaps the most stunning example of the power of chat-driven UI to date, providing indispensable value to users through millions of verified services in an all-in-one system that lets you do everything from grabbing a taxi to paying the electric bill to sending money to a friend. Offerings such as Siri, Google Now, and Cortana are also demonstrating value to millions of people, particularly on mobile form factors, where the conversational user interface (CUI) is often superior to the GUI. Clearly, the value of the CUI is not found simply in ‘speech’.
The CUI is more than just synthesized speech; it is an intelligent interface. It’s intelligent because it combines these voice technologies with natural-language understanding of the intention behind those spoken words, not just recognizing the words as a text transcription. The rest of the intelligence comes from contextual awareness (who said what, when and where), perceptive listening (automatically waking up when you speak) and artificial intelligence reasoning.
— Ron Kaplan, “Beyond the GUI: It’s Time for a Conversational User Interface,” Wired, March 2013
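Kaplan’s distinction between recognizing words and understanding intention can be made concrete with a toy sketch. The code below is purely illustrative: the function names, intents, and keyword rules are invented for this example and stand in for the natural-language-understanding and contextual-awareness components a real CUI would use.

```python
# Hypothetical sketch of Kaplan's distinction: transcription recovers
# words; interpretation maps words plus context (who, when, where) to
# an intent. All names and rules here are illustrative only.

def transcribe(audio_text):
    # A pure speech-to-text step yields only the words, as a string.
    return audio_text

def interpret(utterance, context):
    """Map an utterance and its context to a structured intent."""
    text = utterance.lower()
    if "taxi" in text or "ride" in text:
        # Contextual awareness: fill in where the speaker is.
        return {"intent": "book_ride", "location": context.get("location")}
    if "pay" in text and "bill" in text:
        # Contextual awareness: fill in who is speaking.
        return {"intent": "pay_bill", "user": context.get("user")}
    return {"intent": "unknown"}

# The same words, enriched by context, become an actionable request.
context = {"user": "alice", "location": "Redmond", "time": "18:05"}
words = transcribe("Get me a taxi home")
action = interpret(words, context)
```

The point of the sketch is the shape of the pipeline, not the keyword matching: a production system would replace `interpret` with a trained language-understanding model, but the separation between transcription, intent, and context carries over.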
While in complete agreement with Kaplan’s statement, made a mere two years ago, we believe the full promise lies further on. It is the combination of the CUI with adaptive, learning sensor technologies, a rich personal profile, increasingly pervasive user agents and service bots (powered by machines, humans, or both), and the ability to transact seamlessly that will enable the most fluid, powerful, and human computing experiences we have ever been able to design and build, spanning digital and physical environments and form factors, and from which we will all benefit. 2015 seems poised to be “The Year of the Conversational Bot,” but we are still just scratching the surface of the Symbiosis Promise.
Design a product, service, or solution that demonstrates the value and differentiation of the CUI. Your creation should exhibit the best qualities of a symbiotic human-computer experience, featuring an interface designed to interpret human language and intent. Language, of course, takes many forms: speech, text, gesture, body language, and even thought. Your creation should clearly demonstrate the foundational elements the CUI calls upon in order to delight people. It should meet a clear need and be extensible to wider applications. It may be near-term practical or blue sky, but the idea must be innovative, technically feasible, and have a realistic chance of adoption if instantiated. To deliver an optimal experience, much is implied, from data and identity permissions to cross-app agent and/or bot cooperation and coordination (first and third party); your design should, at a minimum, show awareness of these barriers or explore solutions to them.