Mixed Reality and Robotics – Tutorial @ IROS 2020

About

The IROS organizers have recently announced that the conference, including this tutorial, will use an on-demand virtual format, and the in-person meeting is cancelled. We are adapting the content of this tutorial accordingly, and we intend to offer a program of pre-recorded videos covering Mixed Reality concepts for co-localization and robot interaction, along with walkthroughs (with sample code) for several demo applications.

Registration

To view the tutorial videos and obtain the sample code through the IROS On-Demand site, you will need to be registered for the IROS conference. In addition, to help us better understand the research interests of the audience and more easily contact IROS attendees interested in Mixed Reality, we kindly ask that you click the link in the top left to register for this event. Registration for our tutorial is not binding and is separate from the IROS conference registration.

Abstract

Mixed, Augmented, and Virtual Reality offer exciting new frontiers in communication, entertainment, and productivity. A primary feature of Mixed Reality (MR) is the ability to register the digital world with the physical one, opening the door to a wide variety of robotics applications. This capability enables more natural human-robot interaction: instead of a user interfacing with a robot through a computer screen, we envision a future in which the user interacts with a robot in the same environment through MR, seeing what it sees, seeing its intentions, and seamlessly controlling it in its own representation of the world.

The purpose of this tutorial is both to introduce the audience to the high-level concepts of Mixed Reality and to demonstrate practically how these concepts can be used to interact with a robot through an MR device. We will discuss how various hardware devices (mobile phones, AR/MR/VR headsets, and robots’ on-board sensors) can integrate with cloud services to create a digital representation of the physical world, and how such a representation can be used for co-localization. All participants will get a chance to create an iOS, Android, or Microsoft HoloLens 2 app to control and interact with a small mobile robot.

Workshop Organizers

Marc Pollefeys
Helen Oleynikova
Jeff Delmerico

Agenda

Tentative List of Topics to be Covered

This agenda is being adapted to the On-Demand model, and thus these topics are subject to change at any time before the tutorial.

We intend to cover the “big picture” ideas of Mixed Reality and how we envision it transforming the way we interact with robots; technical details on a few different approaches to co-localization, which allow any Mixed or Augmented Reality device to share a coordinate frame with a robot; and a practical portion introducing some of the tools needed to create full Mixed Reality experiences with robotics. The tutorial will culminate with several demos that attendees will be able to build and run on their own, and adapt to use with their own robots.

We will make all sample code and slides available open-source once the tutorial content goes live, so that all attendees will be able to build on the concepts presented here.

  • Mixed Reality as an intuitive bridge between robots and humans
  • MR, AR, VR, a brief overview of differences and sample devices
  • Co-localization with Mixed Reality devices
    • AR-tag-based
    • Vision-based
    • Shared-map-based
  • Azure Spatial Anchors
    • Technical introduction
    • How to use ASA to co-localize different devices
  • Writing and deploying phone and HoloLens apps
    • Unity
    • ROS# and rosbridge for interfacing with ROS
    • Azure Spatial Anchors SDK for localization
  • Demos!
    • Create an Azure Spatial Anchor and then localize to it later
    • Control a virtual robot in Mixed Reality with a mobile device or HoloLens
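To give a flavor of the AR-tag-based co-localization topic above, here is a minimal sketch of the underlying idea: if both the MR device and the robot observe the same AR tag, composing their tag observations yields the robot's pose in the device frame, giving the two a shared coordinate frame. The 2D poses, frame names, and example numbers below are illustrative assumptions, not part of the tutorial code.

```python
import math

def compose(a, b):
    """Compose two 2D poses (x, y, theta): the pose of b expressed
    through frame a, i.e. T_a * T_b."""
    ax, ay, at = a
    bx, by, bt = b
    return (ax + bx * math.cos(at) - by * math.sin(at),
            ay + bx * math.sin(at) + by * math.cos(at),
            at + bt)

def invert(p):
    """Invert a 2D pose, so compose(p, invert(p)) is the identity."""
    x, y, t = p
    return (-x * math.cos(t) - y * math.sin(t),
            x * math.sin(t) - y * math.cos(t),
            -t)

# Hypothetical observations: both devices detect the same AR tag.
tag_in_device = (2.0, 0.0, 0.0)          # tag pose in the MR device frame
tag_in_robot = (1.0, 1.0, math.pi / 2)   # tag pose in the robot frame

# Shared frame: the robot's pose expressed in the MR device frame.
# T_device_robot = T_device_tag * inv(T_robot_tag)
robot_in_device = compose(tag_in_device, invert(tag_in_robot))
print(robot_in_device)  # → approximately (1.0, 1.0, -pi/2)
```

The same composition generalizes to 3D with 4x4 homogeneous transforms; the vision-based and shared-map-based approaches in the agenda replace the tag observation with other sources of relative pose.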

How to Prepare

Due to the now-virtual format of the tutorial, we intend to provide Docker containers so that attendees can run the demos as easily as possible. We will provide resources to assist in adapting these solutions to your robots at home. More information about the technical requirements to run the demos will be coming soon.

In the meantime, please register on the registration site! This will help us estimate how many people will use the course materials and make it easier to share further information with attendees.