CHI 2013: an Immersive Event
Springtime in Paris this year sees the Association for Computing Machinery’s 31st Conference on Human Factors in Computing Systems (CHI) in full swing from April 27 through May 2, welcoming experts and students from more than 60 countries. A large contingent of researchers from Microsoft Research will be there to exchange ideas and deliver 27 papers and 12 notes covering a broad spectrum of human-computer interaction (HCI) topics, from natural user interfaces and digital arts to machine learning, health care, and predictive intelligence.
Two papers co-authored by Microsoft Research scientists have been honored with Best Paper awards. The first, IllumiRoom: Peripheral Projected Illusions for Interactive Experiences, was written by Brett Jones of the University of Illinois at Urbana-Champaign; Hrvoje Benko and Andy Wilson of Microsoft Research Redmond; and Eyal Ofek of Microsoft Research’s eXtreme Computing Group. The second, Weighted Graph Comparison Techniques for Brain Connectivity Analysis, was written by Basak Alper of the University of California, Santa Barbara; Nathalie Henry Riche of Microsoft Research Redmond; and Benjamin Bach, Tobias Isenberg, and Jean-Daniel Fekete of Inria.
Other highlights during CHI 2013 include awards honoring Microsoft Research personnel. Eric Horvitz, Microsoft distinguished scientist and managing co-director of Microsoft Research Redmond, will be inducted into the CHI Academy to join other individuals who have made a substantial impact on the HCI field. And retired Microsoft researcher George Robertson will receive the ACM SIGCHI Lifetime Achievement in Research Award for his influential contributions to the practice and understanding of human-computer interaction.
HCI is a research area in which diversity is essential and collaboration an asset, so it’s no surprise that many of the accepted papers from Microsoft Research feature interdisciplinary research teams and collaborations with Microsoft product groups, nor that researchers have drawn on data, hardware, and software-engineering skills, as well as strong relationships with the academic community. The breadth of topics being presented makes it clear that Microsoft Research is intent on reimagining the computing experience—on multiple fronts.
Two dissimilar projects at this year’s conference exemplify the breadth of disciplines Microsoft Research has deployed to advance the state of the art in HCI.
Illusions Create an Immersive Experience
IllumiRoom uses the area surrounding a display as surfaces for “peripheral projected illusions”—projected visualizations that enhance traditional viewing and gaming experiences. At first glance, this proof-of-concept augmented-reality system appears straightforward: create an immersive experience by integrating an off-the-shelf projector with a Kinect Sensor to extend the field of view (FOV) surrounding the display. It takes a few moments to realize that IllumiRoom is extrapolating from images on the screen: The projections are being rendered in real time in response to on-screen content. Delve into the research behind the project, and it turns out the IllumiRoom team not only solved immense technical problems, but also had to cope with creative challenges to blur the lines between the physical and the virtual.
This is exactly the sort of multidimensional challenge that intrigues Benko, a researcher with the Natural Interaction Research group. Over the years, Benko’s work in natural user interfaces has encompassed augmented reality, computational illumination, surface computing, new input form factors and devices, and touch and freehand gestural input.
“I’m fascinated with the ability of projectors and cameras to turn any object or any surface into an interactive experience,” he says. “There are many other ways of providing augmented-reality experiences, but projection-based augmented reality intrigues me because it is inherently ‘gear-free’ and, very importantly, the experience is sharable.”
A key requirement for IllumiRoom was the need to self-calibrate, enabling the system to work in any room. Using a Kinect Sensor, the researchers captured the color and geometry of the scene and developed a depth map of the space. From this first step, correspondences between the projector and Kinect’s color camera were transformed into 3-D points of reference that provided a careful calibration enabling a tight match between on-screen content and the physical environment.
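The paper does not publish IllumiRoom’s calibration code, but the core geometric step can be sketched with a standard pinhole camera model. In the illustrative Python below (the intrinsic parameters `fx`, `fy`, `cx`, `cy` are made-up values, and the projector is assumed, for simplicity, to share the camera’s coordinate frame), a depth pixel is lifted into 3-D and re-projected into pixel coordinates:

```python
def backproject(u, v, depth_m, fx, fy, cx, cy):
    """Lift a depth-camera pixel (u, v) with metric depth into a 3-D point
    in the camera's coordinate frame (standard pinhole model)."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

def project(point, fx, fy, cx, cy):
    """Project a 3-D point back into pixel coordinates with a pinhole model.
    In the real system, the point would first be transformed into the
    projector's frame by the pose recovered during calibration."""
    x, y, z = point
    return (fx * x / z + cx, fy * y / z + cy)
```

In practice, calibration amounts to finding the projector pose and intrinsics that minimize reprojection error across many such projector-to-camera correspondences.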
The central challenge, though, was generating real-time, interactive illusions that would adapt automatically to any room, any display, and any game. The prototype works with existing games even when a game’s source code is not available: IllumiRoom captures the game’s image with a video-capture card, then employs optical-flow techniques to analyze the images and drive the illusions.
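The paper does not detail the optical-flow step, but a brute-force block-matching sketch conveys the underlying idea: estimate the dominant motion between two captured frames, which could then drive how the peripheral illusions shift. The function below is a deliberately tiny stand-in, not the production algorithm:

```python
def dominant_shift(prev, curr, max_shift=2):
    """Estimate the dominant (dx, dy) motion between two equally sized
    grayscale frames (2-D lists of numbers) by exhaustive block matching:
    try every shift within max_shift and keep the one with the lowest
    mean squared difference over the overlapping region."""
    h, w = len(prev), len(prev[0])
    best, best_err = (0, 0), float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err = n = 0
            for y in range(h):
                for x in range(w):
                    sy, sx = y + dy, x + dx
                    if 0 <= sy < h and 0 <= sx < w:
                        err += (prev[y][x] - curr[sy][sx]) ** 2
                        n += 1
            if n and err / n < best_err:
                best_err, best = err / n, (dx, dy)
    return best
```

Real-time systems would use a pyramidal or GPU-accelerated flow method rather than this O(shifts × pixels) search, but the input/output contract is the same: two frames in, a motion estimate out.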
“Ultimately, it’s the content which makes the experience shine,” Benko says. “We spent a lot of effort designing different ways to use content from games or movies specifically for this setup. We tried different approaches, from intercepting controller commands and using computer-vision techniques to recording new custom content. They were all challenging, and some worked better than others. Those were the lessons we needed to learn.”
The team then explored the design space of projected visualizations and succeeded not only in extending the FOV of a game, but also in selectively rendering scene elements to augment the appearance of the physical environment. The project tested a total of 11 illusions, from super-saturated colors for a ‘cartoon’ look, to de-saturated color schemes that created a ‘film noir’ look.
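As a hypothetical illustration of that kind of color styling (not the team’s actual rendering code), the function below scales a pixel’s saturation around its luma: a factor above 1 pushes toward a super-saturated “cartoon” look, while a factor near 0 drains color toward a “film noir” feel.

```python
def adjust_saturation(rgb, factor):
    """Scale a pixel's saturation around its luma (Rec. 601 weights).
    factor > 1 super-saturates; factor < 1 desaturates toward grayscale."""
    r, g, b = rgb
    luma = 0.299 * r + 0.587 * g + 0.114 * b
    clamp = lambda v: max(0.0, min(255.0, v))
    return tuple(clamp(luma + (c - luma) * factor) for c in (r, g, b))
```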
But why bother with so many illusions? This is where the work veers away from straight computational solutions and into the creative arena.
“It’s very common for a movie or a game to set the mood of the experience by featuring a certain palette of colors or a style of rendering,” Benko explains. “For example, Batman movies are predominantly dark, with almost a black-and-white quality that’s important to the story. Similarly, games have a signature look. What we wanted to achieve with illusions was to blur the line between your room and the experience you are watching on TV.
“So far, we’ve made the experience spill out into the room, but in the future, we’d like to make it much more interactive. For example, if a ball falls out of the screen, we’d love to be able to throw the ball back! Plus, we’d like to understand what new cinematic effects are possible when content is split over two connected screens.”
IllumiRoom remains in the prototype stage, with much additional research left to be done before it could be made available broadly, but the progress thus far has been gratifying, and the possibilities are intriguing.
New Ways of Thinking for Biologists
The motivation behind At the Interface of Biology and Computation is worlds apart from IllumiRoom. Yet the paper—co-authored by Nir Piterman of the University of Leicester; Alex Taylor, Samin Ishtiaq, Jasmin Fisher, Byron Cook, and Caitlin Cockerton of Microsoft Research Cambridge; Sam Bourton of QuantumBlack; and David Benque of the Royal College of Art—exemplifies once again the diversity of HCI and non-HCI skills that must come together to create a new breed of research tools, in this instance, for biologists. Bio Model Analyzer (BMA) is an extended, ongoing project staffed by a core team of eight members from biology, computing science, social science, design, and software engineering.
“It’s precisely the coming together of the indeterminate qualities of biology, on the one hand, and the logical and systematic properties of computing, on the other, that appeals to me,” says Taylor, a researcher with the Socio-Digital Systems group. “It’s where these seemingly separate worlds rub up against each other that the research becomes interesting. The frictions open up spaces for thinking and working differently to expand what scientists know and how they know it.”
At one level, BMA is a sketching tool that enables users to draw a biological system, such as a genetic regulatory network, by dragging and dropping cells and their contents, such as DNA and proteins, along with extracellular components and relationships onto a simple canvas. But BMA also uses sophisticated computational techniques to determine whether a cellular network can achieve stabilization—a state of equilibrium that makes it resistant to alterations induced by internal or external mechanisms.
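BMA’s actual analysis relies on far more sophisticated symbolic techniques, but a brute-force sketch conveys what “stabilization” means: a discrete network stabilizes if every possible initial state settles into one and the same fixed point. The hypothetical checker below is exhaustive, so it is only feasible for tiny illustrative networks:

```python
from itertools import product

def stabilizes(update, n_vars, levels=2):
    """Check whether a discrete network always settles into a single fixed
    point. `update` maps a state tuple to the next state; all variables
    take values in range(levels). Exhaustive over all initial states."""
    fixed_points = set()
    for state in product(range(levels), repeat=n_vars):
        seen = set()
        while state not in seen:          # follow the trajectory until it repeats
            seen.add(state)
            state = update(state)
        if update(state) != state:        # trajectory ended on a cycle, not a fixed point
            return False
        fixed_points.add(state)
    return len(fixed_points) == 1         # one equilibrium, reached from everywhere
```

For example, a two-variable network in which the second variable copies the first while the first decays to zero stabilizes at (0, 0) from every start, whereas a single toggling variable oscillates and does not stabilize.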
The computational techniques underpinning BMA have been in place since 2011. The project’s current focus is the creation of a tool that is accessible to biologists and complements their workflows. The team’s goal was to enable biologists with little to no programming expertise to use the tool. During the user-interface design phase, though, it became clear that representing the results of empirical experimentation was not the only requirement.
“This is really the rub of the research presented in the paper,” Taylor explains. “Tools like BMA are not simply about creating aesthetically appealing and usable interfaces. They necessarily demand very different ways of thinking about the phenomena under investigation. Adoption of computational techniques demands a very different, if not fundamentally different, way of thinking about biological problems.”
The paper presents designs aimed at easing the problems that can arise when certain computational techniques are applied in biology. It aims to show that there are good reasons for displaying the multiple perspectives that appear intrinsic to the new class of computational tools such as BMA—even though they can introduce conceptual tensions.
“Take time, for example,” Taylor says. “How long does it take for a cellular interaction to occur, and what is the sequence of events over time? But with certain kinds of computation techniques, such as those used in BMA, this kind of time is abstracted away. There exists instead a theoretically infinite and simultaneous number of states. Therefore, BMA doesn’t check whether something follows a particular pathway or sequence of events over time. Rather, it’s about whether all the possible states of a cellular system meet a predefined criterion. That demands a different approach from those of biologists and, hence, creates tension. We found in those tensions, however, opportunities for opening up different ways of knowing.
“User-interface design has been central to the work because it seems to have played a major role in how biologists might understand their problem and work out solutions. It’s the interface between the disciplines and the design of the interface that has profound implications for what is known in science and how it is known, hence the title of the paper.”
Reimagining the Computing Experience
Microsoft Research is pushing hard to look at new ways of making computing more seamlessly beneficial in our lives, whether through touch and gestures, mobile and wearable technology, or analysis of historical data that helps predict the future. Projects such as IllumiRoom create an immersive computing experience, while BMA brings data-based insights closer to biologists. These are just two projects that contribute to a wide-ranging body of work from Microsoft Research that seeks to reimagine how we interact with computing technologies—and to prove that it’s possible.
It’s clear that no single field of endeavor can deliver on this vision. Rather, it will come through innovations and collaboration in multiple disciplines. It’s also clear, from the breadth and depth of experience represented by Microsoft Research during CHI 2013, that there is a strong, collective ambition to achieve the vision.