UIST Showcases Novel Interfaces


By Janie Chang, Writer, Microsoft Research

Hallway conversations at UIST 2010 can sound like planning discussions for science-fiction-movie special effects, buzzing with terms such as “wearable computing,” “augmented reality,” and “smart rooms.”

UIST, the Association for Computing Machinery’s (ACM’s) Symposium on User Interface Software and Technology, brings together researchers and practitioners of new user interfaces. Taking place in New York City from Oct. 3 to 6, the event features presentations and demos that cover topics ranging from traditional graphical and web user interfaces to virtual and augmented reality to tools that assist the motor-impaired. Sponsored by the ACM’s special-interest groups on computer-human interaction and computer graphics, UIST is an opportunity for attendees to stay current with the latest breakthroughs in user-interaction techniques.

Mixing Real and Virtual Worlds

Significant hardware technology advances in recent years have brought radical changes to the types of research projects on natural user interfaces (NUIs) attendees are seeing during UIST. One of the many intriguing projects coming out of Microsoft Research Redmond’s Adaptive Systems and Interaction group is being presented by Andy Wilson, senior researcher, and Hrvoje Benko, researcher, in a paper titled Combining Multiple Depth Cameras and Projectors for Interactions On, Above, and Between Surfaces. The paper describes the interactions and algorithms unique to LightSpace, a small “smart room,” instrumented with depth cameras and projectors, that has been designed to explore a variety of computational strategies related to interactive displays and the space they inhabit.

LightSpace configuration: All projectors and cameras are suspended at a central location above the users, leaving the rest of the space open and configurable.

Latest-generation depth cameras simplify traditionally difficult computer-vision problems and enable inexpensive, real-time 3-D modeling of surface geometry. In LightSpace, cameras and projectors are calibrated to 3-D real-world coordinates, enabling graphics to be projected onto any surface visible to both a camera and a projector, including floors, walls, furniture, and people.
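
As a rough illustration of that calibration idea, the sketch below (not LightSpace code) shows how a point in shared room coordinates can be mapped to a projector pixel with a standard pinhole model, so a graphic can be drawn where it will land on a physical surface. The intrinsics, pose, and example values are made-up placeholders; in practice they would come from calibrating each camera and projector to the room.

```python
# Minimal sketch: mapping a 3-D point in room coordinates to projector pixels.
# K, R, t, and the example point are assumed placeholder values, not real
# calibration data from the LightSpace setup.
import numpy as np

def project_to_projector(point_world, K, R, t):
    """Return the projector pixel where a 3-D room-coordinate point appears."""
    point_proj = R @ point_world + t      # room coordinates -> projector coordinates
    if point_proj[2] <= 0:
        return None                       # point is behind the projector, not visible
    uvw = K @ point_proj                  # pinhole projection
    return uvw[:2] / uvw[2]               # normalize to pixel coordinates

K = np.array([[1400.0,    0.0, 640.0],
              [   0.0, 1400.0, 360.0],
              [   0.0,    0.0,   1.0]])   # projector intrinsics (assumed)
R = np.eye(3)                             # projector orientation (assumed)
t = np.array([0.0, 0.0, 2.5])             # projector mounted 2.5 m above the surface (assumed)

pixel = project_to_projector(np.array([0.1, -0.2, 0.0]), K, R, t)
print(pixel)  # where to draw so the graphic lands at that spot on the surface
```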

Benko and Wilson used LightSpace to study how depth cameras enable new interactive experiences such as transferring projected objects from one display to another by simultaneously touching the object and the destination display. Or a user can “pick up” the object by sweeping it into a hand, see a representation of the object in the hand while walking to an interactive wall display, and “drop” the object onto the wall by touching it with the other hand.

Picking up objects from a table is accomplished by swiping them into a hand. The object—in this case, a red ball—is then represented visually with an icon in the hand.
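
The following sketch is a much-simplified illustration of that pick-up, carry, and drop flow, written as a small state machine. The surface names, user identifier, and event methods are hypothetical; in the actual system the corresponding events are inferred from the depth cameras' 3-D sensing rather than from explicit calls.

```python
# Simplified sketch (not the authors' implementation) of transferring a projected
# object between surfaces by carrying it "in hand".
class CarriedObject:
    def __init__(self, object_id, surface):
        self.object_id = object_id
        self.location = ("surface", surface)   # starts projected on a surface

    def sweep_into_hand(self, user_id):
        """User swipes the object off a surface; it is then shown as an icon in the hand."""
        if self.location[0] == "surface":
            self.location = ("hand", user_id)

    def drop_on_surface(self, user_id, surface):
        """A user carrying the object touches another display; the object transfers there."""
        if self.location == ("hand", user_id):
            self.location = ("surface", surface)

ball = CarriedObject("red_ball", surface="table")
ball.sweep_into_hand(user_id="user_1")   # pick up from the table
ball.drop_on_surface("user_1", "wall")   # walk over and drop onto the wall display
print(ball.location)                     # ('surface', 'wall')
```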

The research is a continuation of many previous projects in which the pair combined projectors and cameras to provide interactivity in space without having to wear tracking gear or displays. The LightSpace project began in January 2010, and a first working demonstration of their “smart room” was presented in March during TechFest 2010, Microsoft Research’s annual technology showcase for Microsoft employees.

“Andy and I are fascinated,” Benko says, “with the idea that all the intelligence in the room actually just consists of smart projectors and cameras that simulate various interactive surfaces and provide connectivity between them. We treat our projector-camera units as smart lighting that can sense the user and the user’s actions, as well as provide information on whatever surface the user desires. One of the most fascinating aspects of LightSpace is that, in the absence of the available surface, in mid-air, we use the human body as a canvas to project information.”

LightSpace enables users to move an image from one surface to another using through-body transition.

Wilson already sees a number of future directions exploring the range of new interactions that LightSpace enables.

“What other kinds of interactions can be embedded in 3-D space?” he speculates. “What if users were able to reconfigure their workspace, moving tables and wall surfaces into place as needed? LightSpace could find and track these interactive surfaces on the fly.

“Or think about conference-room settings. Imagine sitting down to a meeting and having all your relevant documents appear before you, then sharing them with other meeting participants effortlessly with a quick flicking gesture, or opening up a document on a wall by touching the document on the table and pointing at a spot on the wall, rather than fumbling with video cables.”

When asked whether LightSpace could be the early stages of a Star Trek holodeck, Wilson laughs out loud.

“We would be lying if we said that science fiction had no influence on our work!” he says. “Yes, the holodeck is a great vision, but until we have room-sized holographic video, fantastic wireless tactile displays, and all the artificial intelligence implied by the holodeck, we need to carefully design our interactive systems with current technical limitations in mind. But you know, designing around technical limitations is all part of the fun and challenge, and the solutions can be quite inventive!”

Productive Collaboration

Microsoft Research has a long history of external collaboration, and this year’s contributions to the conference reflect this ongoing commitment, as evidenced in the paper Jogging over a Distance between Europe and Australia, a collaboration between Florian “Floyd” Mueller at Australia’s University of Melbourne and Scotland’s Distance Lab; Frank Vetere and Martin R. Gibbs from the University of Melbourne; Darren Edge, researcher at Microsoft Research Asia; Stefan Agamanolis of Distance Lab; and Jennifer G. Sheridan from London Knowledge Lab. The paper is one of four in UIST 2010 that reflect joint work between Microsoft Research and external collaborators.

The Jogging over a Distance paper is the result of Mueller’s ongoing Ph.D. work, which led to collaboration with Edge and Microsoft Research Asia when he interned at that facility after winning a Microsoft Research Asia fellowship.

“The fellowship,” Mueller observes, “gave me the opportunity to draw on the insights of world leaders in the field, and my ongoing collaboration with Darren Edge has provided a new depth of understanding and focus on the role that design plays in exertion interfaces. The goal of this research is to bring people in different locations closer together through the power of sports. Teaming up with Microsoft Research Asia also allowed me to better understand the cross-cultural implications such research can have in bridging distances between countries.”

Microsoft has been a longtime supporter of UIST, and this year is no exception. The company is a symposium sponsor, and the Microsoft Applied Sciences Group provided a set of “adaptive keyboards” to students who entered the UIST 2010 Student Innovation Contest. Program participation is also strong, with six of the 38 accepted papers this year authored or co-authored by Microsoft Research staff members. This reflects the company’s commitment to bringing natural user interfaces into mainstream products such as Kinect, with the vision of building an entirely new breed of applications that will change the nature of software interfaces the way graphical user interfaces changed the market more than 20 years ago.

Microsoft Research Contributions to UIST 2010

A Framework for Robust and Flexible Handling of Inputs with Uncertainty Julia Schwarz, Carnegie Mellon University; Scott Hudson, Carnegie Mellon University; and Andy Wilson, Microsoft Research Redmond.

Combining Multiple Depth Cameras and Projectors for Interactions On, Above, and Between Surfaces Andy Wilson, Microsoft Research Redmond; and Hrvoje Benko, Microsoft Research Redmond.

Content-Aware Dynamic Timeline for Video Browsing Suporn Pongnumkul, University of Washington; Jue Wang, Adobe; Gonzalo Ramos, Microsoft; and Michael Cohen, Microsoft Research Redmond.

Gestalt: Integrated Support for Implementation and Analysis in Machine Learning Processes Kayur Patel, University of Washington; Naomi Bancroft, University of Washington; Steven Drucker, Microsoft Research Redmond; James Fogarty, University of Washington; Amy Ko, University of Washington; and James Landay, University of Washington.

Jogging over a Distance between Europe and Australia Florian “Floyd” Mueller, University of Melbourne and Distance Lab; Frank Vetere, University of Melbourne; Martin R. Gibbs, University of Melbourne; Darren Edge, Microsoft Research Asia; Stefan Agamanolis, Distance Lab; and Jennifer G. Sheridan, London Knowledge Lab.

Pen + Touch = New Tools Ken Hinckley, Microsoft Research Redmond; Koji Yatani, Microsoft Research and the University of Toronto; Michel Pahud, Microsoft Research Redmond; Nicole Coddington, Microsoft; Jenny Rodenhouse, Microsoft; Andy Wilson, Microsoft Research Redmond; Hrvoje Benko, Microsoft Research Redmond; and Bill Buxton, Microsoft Research.
