Welcome to the Mixed Reality and Robotics Tutorial at IROS 2020. This year’s conference is using an on-demand, virtual format, which means that all of the content for this tutorial is available as streaming videos, with code samples to accompany the demos. However, the conference organizers have made registration FREE, so by signing up here, you can gain access to all of the talks and papers, as well as the workshops and tutorials (including this one). Please see the Agenda tab for more detailed information about the tutorial contents.
Once you have registered, you can reach the on-demand videos directly on the IROS On-Demand Platform.
To follow along with the colocalization tutorial, you will also need to register for the Azure Spatial Anchors Linux SDK, which you can do at this Signup Link.
Viewing the tutorial videos requires registration for the IROS conference. In addition, to help us better understand the research interests of the audience and to more easily contact IROS attendees interested in Mixed Reality, we kindly ask that you click the link in the top left to register for this event. Registration for our tutorial is not binding and is separate from IROS conference registration; you will still need the latter to access this tutorial's content through the IROS On-Demand site.
Mixed, Augmented, and Virtual Reality offer exciting new frontiers in communication, entertainment, and productivity. A primary feature of Mixed Reality (MR) is the ability to register the digital world with the physical one, opening the door to a wide variety of robotics applications. This capability enables more natural human-robot interaction: instead of interfacing with a robot through a computer screen, we envision a future in which the user interacts with a robot in a shared environment through MR, to see what it sees, understand its intentions, and seamlessly control it within its own representation of the world.
The purpose of this tutorial is both to introduce the audience to the high-level concepts of Mixed Reality and to demonstrate practically how these concepts can be used to interact with a robot through an MR device. We will discuss how various hardware devices (mobile phones, AR/MR/VR headsets, and robots' on-board sensors) can integrate with cloud services to create a digital representation of the physical world, and how such a representation can be used for co-localization. Participants will have a chance to create an iOS, Android, or Microsoft HoloLens 2 app to control and interact with a virtual robot, with instructions on how to adapt the sample code to a real robot, so attendees can start using Mixed Reality in their own robotics projects.
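To give a flavor of the co-localization idea before the videos, here is a minimal, hypothetical sketch (not part of the tutorial's sample code, and independent of any particular SDK): if the robot and the MR device each observe the pose of one shared anchor in their own coordinate frames, the rigid transform between the two frames follows by composition. The example below uses planar (SE(2)) poses as 3x3 homogeneous matrices; all names and numbers are illustrative.

```python
import math

def make_T(x, y, theta):
    """Homogeneous 2D transform for a pose (x, y, heading theta)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, x],
            [s,  c, y],
            [0.0, 0.0, 1.0]]

def compose(A, B):
    """Matrix product A @ B for 3x3 transforms."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def invert(T):
    """Inverse of a rigid transform: rotation transposed, translation negated."""
    r00, r01, r10, r11 = T[0][0], T[0][1], T[1][0], T[1][1]
    tx, ty = T[0][2], T[1][2]
    return [[r00, r10, -(r00 * tx + r10 * ty)],
            [r01, r11, -(r01 * tx + r11 * ty)],
            [0.0, 0.0, 1.0]]

def apply(T, p):
    """Map a 2D point p through transform T."""
    x, y = p
    return (T[0][0] * x + T[0][1] * y + T[0][2],
            T[1][0] * x + T[1][1] * y + T[1][2])

# Illustrative observations: the same anchor seen from each frame.
T_robot_anchor = make_T(2.0, 1.0, math.pi / 2)   # anchor pose in the robot's map
T_device_anchor = make_T(0.5, -0.5, 0.0)         # anchor pose in the MR device's map

# Co-localization: transform taking device-frame coordinates into the robot frame.
T_robot_device = compose(T_robot_anchor, invert(T_device_anchor))
```

With `T_robot_device` in hand, a hologram the user places in the device's frame can be expressed in the robot's map (and vice versa), which is the essence of sharing one spatial representation between the headset and the robot.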