{"id":933729,"date":"2023-04-10T18:23:03","date_gmt":"2023-04-11T01:23:03","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-blog-post&#038;p=933729"},"modified":"2024-03-20T08:13:13","modified_gmt":"2024-03-20T15:13:13","slug":"gpt-models-meet-robotic-applications-long-step-robot-control-in-various-environments","status":"publish","type":"msr-blog-post","link":"https:\/\/www.microsoft.com\/en-us\/research\/articles\/gpt-models-meet-robotic-applications-long-step-robot-control-in-various-environments\/","title":{"rendered":"GPT Models Meet Robotic Applications: Long-Step Robot Control in Various Environments"},"content":{"rendered":"\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"576\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/04\/Slide1_update-1024x576.jpg\" alt=\"diagram\" class=\"wp-image-935271\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/04\/Slide1_update-1024x576.jpg 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/04\/Slide1_update-300x169.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/04\/Slide1_update-768x432.jpg 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/04\/Slide1_update-1536x864.jpg 1536w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/04\/Slide1_update-2048x1152.jpg 2048w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/04\/Slide1_update-1066x600.jpg 1066w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/04\/Slide1_update-655x368.jpg 655w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/04\/Slide1_update-343x193.jpg 343w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/04\/Slide1_update-240x135.jpg 240w, 
https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/04\/Slide1_update-scaled-640x360.jpg 640w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/04\/Slide1_update-scaled-960x540.jpg 960w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/04\/Slide1_update-1280x720.jpg 1280w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/04\/Slide1_update-1920x1080.jpg 1920w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><figcaption class=\"wp-element-caption\">We have released practical prompts for ChatGPT to generate executable robot action sequences from multi-step human instructions in various environments.<\/figcaption><\/figure>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:100%\">\n<div class=\"wp-block-buttons is-layout-flex wp-block-buttons-is-layout-flex\">\n<div class=\"wp-block-button is-style-fill-download\"><a data-bi-type=\"button\" class=\"wp-block-button__link wp-element-button\" href=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/04\/chatgpt_robot_manipulation_prompts.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Paper<\/a><\/div>\n\n\n\n<div class=\"wp-block-button is-style-fill-github\"><a data-bi-type=\"button\" class=\"wp-block-button__link wp-element-button\" href=\"https:\/\/github.com\/microsoft\/ChatGPT-Robot-Manipulation-Prompts\" target=\"_blank\" rel=\"noreferrer noopener\">GitHub<\/a><\/div>\n<\/div>\n<\/div>\n<\/div>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"introduction\">Introduction<\/h3>\n\n\n\n<p>Imagine having a humanoid robot in your household that can be taught household chores through instruction and demonstration, without any coding. Our team has been developing such a system, which we call <em>Learning-from-Observation<\/em>.<\/p>\n\n\n\n<p>As 
part of our effort, we recently released a paper, <strong>&#8220;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/chatgpt-empowered-long-step-robot-control-in-various-environments-a-case-application\/\" target=\"_blank\" rel=\"noreferrer noopener\">ChatGPT Empowered Long-Step Robot Control in Various Environments: A Case Application<\/a>,&#8221;<\/strong> where we provide a specific example of how OpenAI&#8217;s ChatGPT can be used in a few-shot setting to convert natural language instructions into a sequence of executable robot actions. Our prompts and source code for using them are open-source and publicly available at this <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/github.com\/microsoft\/ChatGPT-Robot-Manipulation-Prompts\" target=\"_blank\" rel=\"noopener noreferrer\">GitHub repository<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>.<\/p>\n\n\n\n<p>Generating robot programs from natural language is an appealing goal that has drawn considerable research interest in the robotics community; some approaches are built on top of large language models such as <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/chat.openai.com\/\" target=\"_blank\" rel=\"noopener noreferrer\">ChatGPT<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>. However, most of these systems were developed within a limited scope, are hardware-dependent, or lack human-in-the-loop functionality. Additionally, most of these studies rely on a specific dataset, which requires data re-collection and model retraining when transferring or extending them to other robotic scenes. 
<strong>From a practical application standpoint, an ideal robotic solution would be one that can be easily applied to other applications or operational settings without requiring extensive data collection or model retraining<\/strong>.<\/p>\n\n\n\n<p>In this paper, we provide a specific example of how ChatGPT can be used in a few-shot setting to convert natural language instructions into a sequence of actions that a robot can execute. In designing the prompts, we tried to ensure that they meet requirements common to many practical applications while remaining easy to customize. The requirements we defined for this paper are:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Easy integration with robot execution systems or visual recognition programs.<\/li>\n\n\n\n<li>Applicability to various home environments.<\/li>\n\n\n\n<li>The ability to provide an arbitrary number of natural language instructions while minimizing the impact of ChatGPT&#8217;s token limit.<\/li>\n<\/ul>\n\n\n\n<p>To meet these requirements, we designed input prompts to encourage ChatGPT to:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Output a sequence of predefined robot actions with explanations in a readable JSON format.<\/li>\n\n\n\n<li>Represent the operating environment in a formalized style.<\/li>\n\n\n\n<li>Infer and output the updated state of the operating environment, which can be reused as the next input, allowing ChatGPT to operate based solely on the memory of the latest operations.<\/li>\n<\/ul>\n\n\n\n<p>We provide a set of prompt templates that structure the entire conversation for input into ChatGPT, enabling it to generate a response. The user&#8217;s instructions, as well as a specific explanation of the working environment, are incorporated into the template and used to generate ChatGPT&#8217;s response. 
For the second and subsequent instructions, ChatGPT&#8217;s response is generated from all previous turns of the conversation, allowing it to correct its own earlier output in response to user feedback, if requested. If the number of input tokens exceeds ChatGPT&#8217;s allowable limit, we truncate the prompt while retaining the most recent information about the updated environment.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"576\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/04\/Slide2-1024x576.png\" alt=\"Prompt flow\" class=\"wp-image-933756\" style=\"width:900px;height:506px\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/04\/Slide2-1024x576.png 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/04\/Slide2-300x169.png 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/04\/Slide2-768x432.png 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/04\/Slide2-1536x864.png 1536w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/04\/Slide2-2048x1152.png 2048w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/04\/Slide2-1066x600.png 1066w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/04\/Slide2-655x368.png 655w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/04\/Slide2-343x193.png 343w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/04\/Slide2-240x135.png 240w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/04\/Slide2-640x360.png 640w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/04\/Slide2-960x540.png 960w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/04\/Slide2-1280x720.png 
1280w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/04\/Slide2-1920x1080.png 1920w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><figcaption class=\"wp-element-caption\">The overall structure of the conversation that is input into ChatGPT to generate a response.<\/figcaption><\/figure>\n\n\n\n<p>In our paper, we demonstrated the effectiveness of our proposed prompts in inferring appropriate robot actions for multi-step language instructions in various environments. Additionally, we observed that ChatGPT&#8217;s conversational ability allows users to adjust its output with natural language feedback, which is crucial for developing an application that is both safe and robust while providing a user-friendly interface.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"integration-with-vision-systems-and-robot-controllers\">Integration with vision systems and robot controllers<\/h3>\n\n\n\n<p>Among recent experimental attempts to generate robot manipulation from natural language using ChatGPT, our work is unique in its focus on the generation of robot action sequences (i.e., &#8220;what-to-do&#8221;), while avoiding verbose language instructions for specifying visual and physical parameters (i.e., &#8220;how-to-do&#8221;), such as how to grab, how high to lift, and what posture to adopt. Although both types of information are essential for operating a robot in reality, the latter is often better presented visually than explained verbally. 
Therefore, we have focused on designing prompts for ChatGPT to recognize what-to-do, while obtaining the how-to-do information from human visual demonstrations and a vision system during robot execution.<\/p>\n\n\n\n<p>As part of our efforts to develop a realistic robotic operation system, we have integrated the proposed system with a learning-from-observation system that includes a speech interface <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/onlinelibrary.wiley.com\/doi\/abs\/10.1002\/tee.23008\" target=\"_blank\" rel=\"noopener noreferrer\">[1]<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/onlinelibrary.wiley.com\/doi\/abs\/10.1002\/tee.23523\" target=\"_blank\" rel=\"noopener noreferrer\">[2]<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, a visual teaching interface <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/arxiv.org\/abs\/2212.10787\" target=\"_blank\" rel=\"noopener noreferrer\">[3]<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, a reusable library of robot actions <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/arxiv.org\/abs\/2212.09242\" target=\"_blank\" rel=\"noopener noreferrer\">[4]<span 
class=\"sr-only\"> (opens in new tab)<\/span><\/a>, and a simulator for testing robot execution <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/arxiv.org\/abs\/2301.01382\" target=\"_blank\" rel=\"noopener noreferrer\">[5]<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>. If you are interested, please refer to the respective papers for the results of robot execution. The code for the teaching interface is available at another <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/github.com\/microsoft\/cohesion-based-robot-teaching-interface\" target=\"_blank\" rel=\"noopener noreferrer\">GitHub repository<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"705\" height=\"1024\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/07\/blog_1-64a5e00593cae-705x1024.jpg\" alt=\"graphical user interface, text, application\" class=\"wp-image-953862\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/07\/blog_1-64a5e00593cae-705x1024.jpg 705w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/07\/blog_1-64a5e00593cae-207x300.jpg 207w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/07\/blog_1-64a5e00593cae-768x1115.jpg 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/07\/blog_1-64a5e00593cae-1058x1536.jpg 1058w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/07\/blog_1-64a5e00593cae-124x180.jpg 124w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/07\/blog_1-64a5e00593cae.jpg 1207w\" sizes=\"auto, (max-width: 705px) 100vw, 705px\" \/><figcaption class=\"wp-element-caption\">An example of integrating the proposed ChatGPT prompts into a robot teaching system. 
The system breaks down natural language instructions into a sequence of robot actions, and then obtains the parameters necessary for robot execution (i.e., how to perform the actions) by asking the user to visually demonstrate each step of the decomposed action sequence. Visual parameters are then extracted from these demonstrations. The task planner is indicated by the dashed box.<\/figcaption><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"354\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/07\/blog_2-64a5e011a5a3a-1024x354.jpg\" alt=\"Human demonstration and robot execution\" class=\"wp-image-953865\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/07\/blog_2-64a5e011a5a3a-1024x354.jpg 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/07\/blog_2-64a5e011a5a3a-300x104.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/07\/blog_2-64a5e011a5a3a-768x266.jpg 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/07\/blog_2-64a5e011a5a3a-240x83.jpg 240w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/07\/blog_2-64a5e011a5a3a.jpg 1212w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><figcaption class=\"wp-element-caption\">(Top) The step-by-step demonstration corresponding to the planned tasks. (Middle and Bottom) Execution of the tasks by two different types of robot hardware. We have been developing a reusable library of robot skills (e.g., grab, pick up, bring, etc.) for several types of robot hardware. 
To learn more about the skill library, refer to our <a href=\"https:\/\/arxiv.org\/pdf\/2212.09242.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">paper<\/a>.<\/figcaption><\/figure>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"conclusion\">Conclusion<\/h3>\n\n\n\n<p>The main contribution of this paper is the provision and publication of generic prompts for ChatGPT that can be easily adapted to meet the specific needs of individual experimenters. The impressive progress of large language models is expected to further expand their use in robotics. We hope that this paper provides practical knowledge to the robotics research community, and we have made our prompts and source code available as open-source material on this <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/github.com\/microsoft\/ChatGPT-Robot-Manipulation-Prompts\" target=\"_blank\" rel=\"noopener noreferrer\">GitHub repository<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"bibliography\">Bibliography<\/h4>\n\n\n\n<pre class=\"wp-block-code\"><code>@ARTICLE{10235949,\n  author={Wake, Naoki and Kanehira, Atsushi and Sasabuchi, Kazuhiro and Takamatsu, Jun and Ikeuchi, Katsushi},\n  journal={IEEE Access}, \n  title={ChatGPT Empowered Long-Step Robot Control in Various Environments: A Case Application}, \n  year={2023},\n  volume={11},\n  number={},\n  pages={95060-95078},\n  doi={10.1109\/ACCESS.2023.3310935}}<\/code><\/pre>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"about-our-research-group\">About our research group<\/h3>\n\n\n\n<p>Visit our homepage: <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/group\/applied-robotics-research\/\">Applied Robotics Research<\/a><\/p>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"learn-more-about-this-project\">Learn more about this project<\/h4>\n\n\n\n<ul class=\"wp-block-list\">\n<li><a 
href=\"https:\/\/www.microsoft.com\/en-us\/research\/project\/interactive-learning-from-observation\/\">[homepage] Learning-from-Observation <\/a><\/li>\n\n\n\n<li><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/ieeexplore.ieee.org\/abstract\/document\/9382750\" target=\"_blank\" rel=\"noopener noreferrer\">[paper] A Learning-from-Observation Framework: One-Shot Robot Teaching for Grasp-Manipulation-Release Household Operations<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/li>\n\n\n\n<li><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/arxiv.org\/abs\/2212.10787\" target=\"_blank\" rel=\"noopener noreferrer\">[paper] Interactive Learning-from-Observation through multimodal human demonstration<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/li>\n\n\n\n<li><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/arxiv.org\/abs\/2212.09242\" target=\"_blank\" rel=\"noopener noreferrer\">[paper] Learning-from-Observation System Considering Hardware-Level Reusability<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/li>\n\n\n\n<li><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/arxiv.org\/pdf\/2301.01382.pdf\" target=\"_blank\" rel=\"noopener noreferrer\">[paper] Task-sequencing Simulator: Integrated Machine Learning to Execution Simulation for Robot Manipulation<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/li>\n\n\n\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/group\/applied-robotics-research\/articles\/gpt-models-meet-robotic-applications-co-speech-gesturing-chat-system\/\" target=\"_blank\" rel=\"noreferrer noopener\">[blog] GPT Models Meet Robotic Applications: Co-Speech Gesturing Chat System &#8211; Microsoft 
Research<\/a><\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>Imagine having a humanoid robot in your household that can be instructed and demonstrated household chores without coding\u2014Our team has been developing such a system, which we call Learning-from-Observation. As part of our effort, we recently released a paper, &#8220;ChatGPT Empowered Long-Step Robot Control in Various Environments: A Case Application,&#8221; where we provide a specific [&hellip;]<\/p>\n","protected":false},"author":39916,"featured_media":0,"template":"","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":true,"_classifai_error":"","msr-content-parent":668253,"msr_hide_image_in_river":0,"footnotes":""},"research-area":[],"msr-locale":[268875],"msr-post-option":[],"class_list":["post-933729","msr-blog-post","type-msr-blog-post","status-publish","hentry","msr-locale-en_us"],"msr_assoc_parent":{"id":668253,"type":"group"},"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-blog-post\/933729","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-blog-post"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/msr-blog-post"}],"author":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/users\/39916"}],"version-history":[{"count":42,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-blog-post\/933729\/revisions"}],"predecessor-version":[{"id":1016811,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-blog-post\/933729\/revisions\/1016811"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=933729"}],"wp:term":[{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-jso
n\/wp\/v2\/research-area?post=933729"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=933729"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=933729"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}