{"id":701506,"date":"2020-10-28T10:28:05","date_gmt":"2020-10-28T17:28:05","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?p=701506"},"modified":"2025-03-03T08:58:55","modified_gmt":"2025-03-03T16:58:55","slug":"enabling-interaction-between-mixed-reality-and-robots-via-cloud-based-localization","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-us\/research\/blog\/enabling-interaction-between-mixed-reality-and-robots-via-cloud-based-localization\/","title":{"rendered":"Enabling interaction between mixed reality and robots via cloud-based localization"},"content":{"rendered":"\n<p class=\"has-text-align-center\"><strong>As of November 2024, the Azure Spatial Anchors (ASA) service has been retired (see\u00a0<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/azure.microsoft.com\/en-us\/updates?id=azure-spatial-anchors-retirement\" target=\"_blank\" rel=\"noopener noreferrer\">announcement<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>). 
The service and SDK are no longer available at the links below.<\/strong><\/p>\n\n\n\n<div style=\"height:10px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<figure class=\"wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"wp-block-embed__wrapper\">\n<iframe loading=\"lazy\" title=\"Enabling interaction between mixed reality and robots via cloud-based localization\" width=\"500\" height=\"281\" src=\"https:\/\/www.youtube-nocookie.com\/embed\/bhoNnqtte_M?feature=oembed&rel=0\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe>\n<\/div><\/figure>\n\n\n\n<p><em>You are here.<\/em> We see some representation of this every day\u2014a red pin, a pulsating blue dot, a small graphic of an airplane. Without a point of reference on which to anchor it, though, <em>here <\/em>doesn\u2019t help us make our next move or coordinate with others. But in the context of an office building, street, or U.S. map, \u201chere\u201d becomes a location that we can understand in relation to other points. We\u2019re near the lobby; at the intersection of Broadway and Seventh Avenue; above Montana. A map and an awareness of where we are in it are important to knowing where <em>here <\/em>is and what we have to do to get <em>there<\/em>.<\/p>\n\n\n\n<p>The answer to \u201cWhere am I?\u201d is important to us as humans, but having this <em>spatial intelligence<\/em> is also a key capability for digital devices. Understanding where they are and what is around them lets them bridge the digital and physical worlds and use that digital information to do more in the real world. 
Mixed reality facilitates this connection between the digital and physical in a host of different ways\u2014from enabling the visualization of digital data over the real world to simulating interactions with virtual objects in a realistic way. Mixed reality devices such as Microsoft HoloLens and mixed reality\u2013capable mobile devices are able to build visual maps of their environments and recognize their place in them. Then, using these maps, they\u2019re able to create and maintain holograms and other digital content in the correct places in the real world over time. Since the release of <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/azure.microsoft.com\/en-us\/services\/spatial-anchors\/\">Azure Spatial Anchors<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, a cloud-based localization service, the ability to localize to a space not only across time but also across devices has become more widely available and easier to achieve, making it possible for multiple people with different devices to localize to the same environment and see the same digital content persistently in the same place. We\u2019re excited to make available the <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/github.com\/microsoft\/azure_spatial_anchors_ros\">Azure Spatial Anchors Linux SDK<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, a special research software release for mobile robot use cases. 
With the Azure Spatial Anchors Linux SDK, robots can now use the service to localize and share information within this mixed reality ecosystem.<\/p>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"margin-callout\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 annotations__list--right\">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">SDK<\/span>\n\t\t\t<a href=\"https:\/\/github.com\/microsoft\/azure_spatial_anchors_ros\" data-bi-cN=\"Azure Spatial Anchors Linux SDK ROS Wrapper\" data-external-link=\"false\" data-bi-aN=\"margin-callout\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Azure Spatial Anchors Linux SDK ROS Wrapper<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<p>The Azure Spatial Anchors Linux SDK is compatible with Ubuntu and the Robot Operating System (ROS), making it easy for the robotics research community to begin exploring novel applications of robotics that utilize mixed reality. 
The SDK allows robots with an onboard camera and a pose estimation system to access the service; researchers can use it to localize robots to the environment, to other robots, and to people using mixed reality devices, opening the door to better human-robot interaction and greater robot capabilities.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"azure-spatial-anchors-how-it-works\">Azure Spatial Anchors: How it works<\/h3>\n\n\n\n<p>To create digital content in the places that users expect to see it and then keep it there over time, HoloLens and other mixed reality devices need to estimate how they\u2019re moving through the world using visual simultaneous localization and mapping (SLAM). By tracking salient feature points in a sequence of images from their onboard cameras and fusing that with inertial measurements, mixed reality devices can both estimate how they\u2019re moving and build a sparse <em>local <\/em>map of where these feature points are in 3D. Android and iOS mobile devices utilize the same type of visual SLAM algorithms\u2014via ARCore and ARKit, respectively\u2014to render augmented reality content on screen, and these algorithms produce the same kind of sparse maps as mixed reality devices.<\/p>\n\n\n\n<p>Azure Spatial Anchors (ASA) works by taking these sparse local maps from devices and matching them to larger, global maps in the cloud. In addition to the 3D feature points, these global maps consist of descriptors computed at the feature points, which enable devices to recognize that they\u2019re seeing the same features when they observe that spot again. When an individual captures feature points at a location in the world and adds those to global maps in the cloud, they define a coordinate system relative to the local map. This coordinate system allows mixed reality apps to attach spatial data to that physical place. 
We call this coordinate system a <em>spatial anchor <\/em>because it provides an anchor for digital content in the real world and enables this content to persist there over time.<\/p>\n\n\n\n<p>When a mixed reality device observes the same place in the world at some later time and the device queries ASA with a local map of the place, some of the feature points in the query map should match the ones in the cloud map, which allows ASA to robustly compute a relative six-degree-of-freedom pose for the device using these correspondences. Knowing the relative pose of the device to the anchor coordinate system enables all of the spatial data attached to that anchor to be displayed at the correct place in the physical world. If more than one mixed reality device localizes to an anchor at the same time, they can each visualize the same digital information but from their own perspective while looking at the scene. This effectively colocalizes the devices to each other indirectly by sharing the coordinate frame of the anchor.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"spatial-anchors-and-the-opportunities-they-present-for-robotics\">Spatial anchors\u2014and the opportunities they present\u2014for robotics<\/h3>\n\n\n\n<p>Mobile robots are solving the same problem as HoloLens and mixed reality\u2013capable mobile devices: estimating how they\u2014and their sensors\u2014are moving in a particular environment. This makes mobile robots a natural fit for ASA. Enabling robots to localize to spaces will offer them the ability to access data connected to spatial anchors. For example, a robot inspecting an industrial site could access information about a particular machine if it localizes to a spatial anchor next to the machine. This ability becomes even more powerful in multi-robot scenarios. 
If two robots are localized to the same spatial anchor, they can automatically share a common reference frame without explicit colocalization such as tracking each other with fiducial markers. Several robots working in the same environment could be assigned tasks based on their locations; for example, order pickups in a warehouse could be determined based on which robot is closest to the desired inventory. In addition to utilizing spatial anchors, robots with the ASA Linux SDK can <em>create<\/em> them. Mapping an environment and populating it with spatial anchors can be automated using a robot with this SDK, improving efficiency and helping to expand and improve the global map in the cloud.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/10\/Reduced_Linux_Gif-1.gif\" alt=\"Intern Oswaldo Ferro, using a HoloLens 2 device, has placed a spatial anchor, and now a robot with an onboard camera is able to localize to this anchor. With the HoloLens and robot both localized to the same anchor, they\u2019re effectively colocalized by sharing the anchor\u2019s coordinate system.\"\/><figcaption class=\"wp-element-caption\">Intern Oswaldo Ferro, using a HoloLens 2 device, has placed a spatial anchor, and now a robot with an onboard camera is able to localize to this anchor. With the HoloLens and robot both localized to the same anchor, they\u2019re effectively colocalized by sharing the anchor\u2019s coordinate system.<\/figcaption><\/figure>\n\n\n\n<p class=\"has-small-font-size\"><\/p>\n\n\n\n<p>Enabling robots to colocalize with different types of devices, especially mixed reality devices and mixed reality\u2013capable devices, opens up new opportunities for research and innovation in human-robot interaction. 
We envision mixed reality as an important tool for robot spatial intelligence and autonomy, and our ambition is to unite humans and robots through mixed reality in ways that result in improved teamwork. In the same way that colocalization of two robots enables them to share spatial data and collaborate by having a common reference frame, robots colocalized with mixed reality devices can interact with contextual data in a way that both humans and machines can understand. This unlocks more intuitive interaction, such as a HoloLens user employing a \u201ccome here\u201d gesture to call a robot over rather than having to teleoperate the robot or translate their position as a navigation goal to the robot\u2019s frame of reference.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/10\/Gif_2_oswaldo_spot_video-2.gif\" alt=\"Once a robot is colocalized with other devices, human-robot interaction becomes much more natural because spatial information can be easily shared. Here, the robot and HoloLens 2 are colocalized through the same spatial anchor, and the HoloLens leverages its hand-tracking capabilities to recognize gestures such as \u201ccome here\u201d using research on action recognition from colleagues Federica Bogo and Jan St\u00fchmer (who has since left Microsoft) at the Mixed Reality and AI Lab in Z\u00fcrich. Since the robot and HoloLens share a coordinate system through ASA, the location of the HoloLens is directly understandable to the robot, and the \u201ccome here\u201d gesture triggers it to plan a path from its location to just in front of the HoloLens user.\"\/><figcaption class=\"wp-element-caption\">Once a robot is colocalized with other devices, human-robot interaction becomes much more natural because spatial information can be easily shared. 
Here, the robot and HoloLens 2 are colocalized through the same spatial anchor, and the HoloLens leverages its hand-tracking capabilities to recognize gestures such as \u201ccome here\u201d using research on action recognition from colleagues Federica Bogo and Jan St\u00fchmer (who has since left Microsoft) at the Mixed Reality and AI Lab in Z\u00fcrich. Since the robot and HoloLens share a coordinate system through ASA, the location of the HoloLens is directly understandable to the robot, and the \u201ccome here&#8221; gesture triggers it to plan a path from its location to just in front of the HoloLens user.<\/figcaption><\/figure>\n\n\n\n<p>The ASA Linux SDK is being released in two parts\u2014closed-source binaries and an open-source ROS wrapper\u2014and is targeting the Ubuntu 18.04 and 20.04 distributions. This release is for research use only and may not be used commercially. For HoloLens, Android devices, and iOS devices, applications handle pose estimation, via HoloLens head tracking, ARCore, and ARKit, respectively, as well as anchor localization. These pose estimation processes are tightly coupled with their ASA SDKs and so applications can only use the camera tracking system of their respective devices; individuals aren\u2019t free to run their own SLAM algorithms while using ASA. Because of the diversity of robot sensor configurations, for this new SDK, people need to provide the camera\u2019s pose independently via some other pose estimation process. This can happen directly, as in the case of a robot navigating with visual SLAM that localizes to an anchor with the same camera. Another example would be a LIDAR-based ground robot\u2014also equipped with a camera\u2014navigating in a 2D map and leveraging the transformation from the robot base to the camera to estimate the camera pose in the world frame. 
In addition to estimating the pose independently, the SDK requires a calibrated camera, as the user is also responsible for undistorting the images before they\u2019re provided to the SDK with their poses.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"see-the-asa-linux-sdk-in-action\">See the ASA Linux SDK in action<\/h3>\n\n\n\n<p>The capabilities of the Azure Spatial Anchors Linux SDK are being demonstrated as part of a tutorial at the <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/www.iros2020.org\/\">2020 International Conference on Intelligent Robots and Systems (IROS)<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>. This year, the conference is using a virtual format featuring on-demand videos, which are available now. (The conference is also free to attend this year, so in addition to our tutorial, attendees will have access to all the papers, talks, and workshops.)<\/p>\n\n\n\n<p>The goal of our tutorial, <em>Mixed Reality and Robotics<\/em>, is to provide resources so those without prior mixed reality experience can integrate some mixed reality tools into their robotics research. The tutorial includes several conceptual talks about human-robot interaction through mixed reality and methods of colocalization, including with the ASA Linux SDK. Several demos include sample code and video walkthroughs. On the topic of human-robot interaction, we provide a sample mixed reality app for HoloLens and mobile devices that allows those using it to interact with a virtual robot running in a simulator on a local computer. Attendees will also learn how to use the ASA Linux SDK with prerecorded datasets in a colocalization demo. These demos are intended to be deployable with minimal prerequisite software or hardware, but we also provide instruction on how to adapt both of these demos to work with attendees\u2019 own robots. 
For those interested in the tutorial, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/www.iros2020.org\/ondemand\/signup\">register for free access to the IROS on-demand content<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>. And check out our <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/github.com\/microsoft\/azure_spatial_anchors_ros\">GitHub repository<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> for instructions on how to install the ASA Linux SDK yourself and leave us feedback. <\/p>\n","protected":false},"excerpt":{"rendered":"<p>You are here. We see some representation of this every day\u2014a red pin, a pulsating blue dot, a small graphic of an airplane. Without a point of reference on which to anchor it, though, here doesn\u2019t help us make our next move or coordinate with others. But in the context of an office building, street, or U.S. map, \u201chere\u201d becomes a location that we can understand in relation to other points. We\u2019re near the lobby; at the intersection of Broadway and Seventh Avenue; above Montana. 
A map and an awareness of where we are in it is important to knowing where here is and what we have to do to get there.<\/p>\n","protected":false},"author":38838,"featured_media":701941,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","msr-author-ordering":[{"type":"user_nicename","value":"Jeffrey Delmerico","user_id":"38562"},{"type":"user_nicename","value":"Helen Oleynikova","user_id":"38604"},{"type":"user_nicename","value":"Juan Nieto","user_id":"39760"},{"type":"user_nicename","value":"Marc Pollefeys","user_id":"36191"}],"msr_hide_image_in_river":null,"footnotes":""},"categories":[1],"tags":[],"research-area":[13556],"msr-region":[],"msr-event-type":[],"msr-locale":[268875],"msr-post-option":[],"msr-impact-theme":[],"msr-promo-type":[],"msr-podcast-series":[],"class_list":["post-701506","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-research-blog","msr-research-area-artificial-intelligence","msr-locale-en_us"],"msr_event_details":{"start":"","end":"","location":""},"podcast_url":"","podcast_episode":"","msr_research_lab":[602418],"msr_impact_theme":[],"related-publications":[],"related-downloads":[],"related-videos":[],"related-academic-programs":[],"related-groups":[],"related-projects":[727042],"related-events":[633120],"related-researchers":[{"type":"user_nicename","value":"Jeffrey Delmerico","user_id":38562,"display_name":"Jeffrey Delmerico","author_link":"<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jedelmer\/\" aria-label=\"Visit the profile page for Jeffrey Delmerico\">Jeffrey Delmerico<\/a>","is_active":false,"last_first":"Delmerico, Jeffrey","people_section":0,"alias":"jedelmer"},{"type":"user_nicename","value":"Marc Pollefeys","user_id":36191,"display_name":"Marc 
Pollefeys","author_link":"<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\" aria-label=\"Visit the profile page for Marc Pollefeys\">Marc Pollefeys<\/a>","is_active":false,"last_first":"Pollefeys, Marc","people_section":0,"alias":"mapoll"}],"msr_type":"Post","featured_image_thumbnail":"<img width=\"960\" height=\"540\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/10\/1400x788_No_logo_Linux_Still_option2-960x540.jpg\" class=\"img-object-cover\" alt=\"hand with pixels around it for Mixed Reality\" decoding=\"async\" loading=\"lazy\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/10\/1400x788_No_logo_Linux_Still_option2-960x540.jpg 960w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/10\/1400x788_No_logo_Linux_Still_option2-300x169.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/10\/1400x788_No_logo_Linux_Still_option2-1024x576.jpg 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/10\/1400x788_No_logo_Linux_Still_option2-768x432.jpg 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/10\/1400x788_No_logo_Linux_Still_option2-1536x865.jpg 1536w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/10\/1400x788_No_logo_Linux_Still_option2-2048x1153.jpg 2048w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/10\/1400x788_No_logo_Linux_Still_option2-16x9.jpg 16w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/10\/1400x788_No_logo_Linux_Still_option2-1066x600.jpg 1066w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/10\/1400x788_No_logo_Linux_Still_option2-655x368.jpg 655w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/10\/1400x788_No_logo_Linux_Still_option2-343x193.jpg 343w, 
https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/10\/1400x788_No_logo_Linux_Still_option2-640x360.jpg 640w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/10\/1400x788_No_logo_Linux_Still_option2-1280x720.jpg 1280w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/10\/1400x788_No_logo_Linux_Still_option2-1920x1080.jpg 1920w\" sizes=\"auto, (max-width: 960px) 100vw, 960px\" \/>","byline":"<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jedelmer\/\" title=\"Go to researcher profile for Jeffrey Delmerico\" aria-label=\"Go to researcher profile for Jeffrey Delmerico\" data-bi-type=\"byline author\" data-bi-cN=\"Jeffrey Delmerico\">Jeffrey Delmerico<\/a>, Helen Oleynikova, Juan Nieto, and <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\" title=\"Go to researcher profile for Marc Pollefeys\" aria-label=\"Go to researcher profile for Marc Pollefeys\" data-bi-type=\"byline author\" data-bi-cN=\"Marc Pollefeys\">Marc Pollefeys<\/a>","formattedDate":"October 28, 2020","formattedExcerpt":"You are here. We see some representation of this every day\u2014a red pin, a pulsating blue dot, a small graphic of an airplane. 
Without a point of reference on which to anchor it, though, here doesn\u2019t help us make our next move or coordinate with&hellip;","locale":{"slug":"en_us","name":"English","native":"","english":"English"},"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts\/701506","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/users\/38838"}],"replies":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/comments?post=701506"}],"version-history":[{"count":19,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts\/701506\/revisions"}],"predecessor-version":[{"id":1133367,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts\/701506\/revisions\/1133367"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media\/701941"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=701506"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/categories?post=701506"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/tags?post=701506"},{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=701506"},{"taxonomy":"msr-region","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-region?post=701506"},{"taxonomy":"msr-event-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event-type?post=70
1506"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=701506"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=701506"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=701506"},{"taxonomy":"msr-promo-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-promo-type?post=701506"},{"taxonomy":"msr-podcast-series","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-podcast-series?post=701506"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}