Microsoft Research Explores “Robots Among Us”
May 16, 2008
Eight recipients of the human-robot interaction request for proposal awards examine the growing role of robots in society

REDMOND, Wash., May 16, 2008 – The field of robotics is going through a transformation. As the technology steadily marches on, the practical side of what can be done with robots is evolving from automatic vacuums and human-operated devices toward much more complicated interactions between machines and human beings.

Moving the technology toward these so-called “social robots” are researchers in a variety of disciplines engaged in the growing field of human-robot interaction (HRI). To explore some of the challenges in realizing the potential of HRI, Microsoft Research launched the “Robots Among Us” request for proposals (RFP) last October with the bold declaration, “The robots are coming!”

Eight winners will receive a share of more than US$500,000 awarded under the program. The winning research proposals were selected from 74 submissions by academic researchers in 24 countries. The projects explore a broad range of devices, technologies and functions as robots begin to work with and alongside human beings.

The RFP focuses on the general paradigm shift from “robots as tools” to “social robots,” and considers HRI in the context of other computing devices deployed in the modern human environment, such as PCs, smartphones and the World Wide Web. Winners of this year’s Microsoft Research “Robots Among Us” RFP are as follows:

  • “Snackbot: A Service Robot,” Jodi Forlizzi and Sara Kiesler, Carnegie Mellon University. Snackbot will roam the halls of two large office buildings at Carnegie Mellon University, selling (or in some cases, giving away) snacks and performing other services. Microsoft’s grant will help the team link its current robot prototype to the Web, e-mail, instant messaging and mobile services. The group will also deploy the robot in a field study to understand the uptake of robotic products and services.

  • “Human-Robot-Human Interface for an autonomous vehicle in challenging environments,” Ioannis Rekleitis and Gregory Dudek, McGill University, Canada. Utilizing Microsoft Robotics Studio, this group will work to provide an interface for controlling a robot operating on land and underwater, as well as a visualization tool for interpreting the visual feedback. The work will also create a new method for communicating with the AQUA robot when a direct link to a controlling console is not available.

  • “Personal Digital Interfaces for Intelligent Wheelchairs,” Nicholas Roy, Massachusetts Institute of Technology. Using a Windows Mobile PDA outfitted with a remote microphone and speech processor, this group will create a single, flexible point of interaction to control wheelchairs. The project will address human-robot interaction challenges in how the spatial context of the interaction varies depending on the location of the wheelchair, the location of the hand-held device and the location of the resident. This project is part of an ongoing collaboration with a specialized care residence in Boston.

  • “Human-Robot Interaction to Monitor Climate Change via Networked Robotic Observatories,” Dezhen Song, Texas A&M University, and Ken Goldberg, University of California, Berkeley. This team will develop a new human-telerobot system to engage the public in documenting climate change effects on natural environments and wildlife, and to provide a testbed for the study of human-robot interaction. To facilitate this, a new type of human-robot system will be built to allow anyone with a browser to participate in viewing and collecting data via the Internet. The human-robot interface will combine telerobotic cameras and sensors with a competitive game where “players” score points by taking photos and classifying the photos of others.

  • “FaceBots: Robots utilizing and publishing social information in Facebook,” Nikolaos Mavridis and Tamer Rabie, United Arab Emirates University. The system to be developed by Mavridis and Rabie is expected to achieve two significant novelties: arguably being the first robot that is truly embedded in a social web, and being the first robot that can purposefully exploit and create social information available online. Furthermore, it is expected to provide empirical support for their main hypothesis — that the formation of shared episodic memories within a social web can lead to more meaningful long-term human-robot relationships.

  • “Multi-Touch Human-Robot Interaction for Disaster Response,” Holly Yanco, University of Massachusetts Lowell. This group wants to create a common computing platform that can interact with many different information systems, personnel from different backgrounds and expertise, and robots deployed for a variety of tasks in the event of a disaster. The proposed research intends to bridge the technological gaps through the use of collaborative tabletop multi-touch displays such as the Microsoft Surface. The group will develop an interface between the multi-touch display and Microsoft Robotics Studio to create a multi-robot interface for command staff to monitor and interact with all of the robots deployed at a disaster response.

  • “Survivor Buddy: A Web-Enabled Robot as a Social Medium for Trapped Victims,” Robin Murphy, University of South Florida. The main focus of this group is the assistance of humans who will be dependent on a robot for long periods of time. One function is to provide two-way audio communication between the survivor and the emergency response personnel. Other ideas are being studied, such as playing therapeutic music with a beat designed to regulate heartbeat or breathing. The idea is that a web-enabled, multimedia robot allows: 1) the survivor to take some control over the situation and find a soothing activity while waiting for extrication; and 2) responders to support and influence the state of mind of the victim.

  • “Prosody Recognition for Human-Robot Interaction,” Brian Scassellati, Yale University. This group will work to build a novel prosody recognition algorithm for release as a component for Microsoft Robotics Studio. Vocal prosody is the information contained in a speaker’s tone of voice that conveys affect, and is a critical aspect of human-human interactions. In order to move beyond direct control of robots toward autonomous social interaction between humans and robots, robots must be able to construct models of human affect by indirect, social means.

“Our goal is to accelerate HRI research so developers will have the tools and software they need to build robots that interact with humans in real-world environments, performing useful applications safely, effectively and efficiently,” says Stewart Tansley, senior research program manager, Microsoft Research. “And we are thrilled that so many of the winners emphasize healthcare, the environment, search and rescue, and other areas of immediate societal value.”

The RFP is the latest in a series of robotics-related investments that Microsoft is making, including the Institute for Personal Robots in Education and Microsoft Robotics Studio. The Robotics Studio is a software development kit the company has made available since 2006, at no cost for academic and non-commercial use, and a number of the projects that submitted proposals will be using and extending this technology.

According to Tansley, the “Robots Among Us” RFP was unique in the field of HRI in its emphasis on the larger technology ecosystem surrounding robots working together with people.

“The particular thing that intrigued us about this field from a research point of view is that there’s an ecosystem of devices in the context of how robots and humans interact,” says Tansley. “We are interested to see how PCs, phones, the Internet and other common technologies come into play in the world of more socialized robots.”

One of this year’s winners, University of South Florida Professor Robin Murphy, heads up a project that has created companion robots for search and rescue operations. Echoing Tansley’s assertion, her team’s creation integrates a variety of media and communications technologies to keep survivors company between the time they’re located and when rescuers are able to extract them.

The project is a collaboration between the University of South Florida and Stanford University, which is investigating the way humans respond to various media.

“When people think of human-robot interaction, they usually either think of people behind the robot -- controlling it and interacting with the data being provided -- or they think about people being in front of the robot, seeing what it’s doing and watching its interaction with the world,” says Murphy. “This project is combining those two worlds. The robot is now the media between the person who’s trapped, and the rescuers, doctors, family members and larger world that the person wants to access. So we’re getting at both sides, and the robot is in the middle.”

Murphy’s robot addresses the problem of helping people trapped by mudslides, mine and building collapses, or other disasters to cope with confinement and isolation while awaiting rescue. Prototypes for the robots have been put to use as far back as the La Conchita, Calif., mudslides in 1995, and more recently after the Berkman Plaza II garage collapse in Jacksonville, Fla., in December 2007.

“Between the time we locate a survivor and the time they are able to be extracted, several hours can elapse,” Murphy says. “We began thinking of the idea: What are you going to do for four to ten hours? What would we do? How do you keep people company? What would they want?”

The team’s robot comes with a screen, speakers and a good microphone, giving survivors the ability to talk to people, videoconference and access media such as news and music.

Part of the inspiration, she says, came from a 2006 accident in which a group of Australian miners were trapped, and requested some music while they awaited rescue.

“We hear that music in some cases will soothe people and calm them down,” Murphy added. “The Survivor Buddy gives the person the ability to communicate, as well as take some control of their environment.”

According to Murphy, the support from Microsoft Research will help her team explore the most effective ways for the robot to interact with victims and provide comfort as well as necessary services. The grant will help fund two graduate students, one at Stanford and one in Florida, to continue the group’s research into how humans and robots communicate and interact. In addition, there will be enough funding in the grant to build two of the Survivor Buddy robots and have them on call in case of a disaster.

“On one hand, we get a real-world benefit of something we can use in a disaster,” she says. “And on the other hand, we move the science forward on how people react and interact with things that aren’t necessarily anthropomorphic. This frees up the grad students to do things we’ve never been able to do before, because this kind of grant is hard to get. Microsoft is really at the cutting edge of making this kind of money available as the field of human-robot interaction emerges.”
