{"id":1009260,"date":"2024-02-27T09:00:00","date_gmt":"2024-02-27T17:00:00","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?p=1009260"},"modified":"2024-02-27T06:26:12","modified_gmt":"2024-02-27T14:26:12","slug":"structured-knowledge-from-llms-improves-prompt-learning-for-visual-language-models","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-us\/research\/blog\/structured-knowledge-from-llms-improves-prompt-learning-for-visual-language-models\/","title":{"rendered":"Structured knowledge from LLMs improves prompt learning for visual language models"},"content":{"rendered":"\n<p class=\"has-text-align-center\"><strong><em>This research paper was presented at the <\/em><\/strong><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/aaai.org\/aaai-conference\/\" target=\"_blank\" rel=\"noopener noreferrer\"><strong><em>38th Annual AAAI Conference on Artificial Intelligence<\/em><\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><strong><em> (AAAI-24), the premier forum for advancing understanding of intelligence and its implementation in machines.<\/em><\/strong><\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1400\" height=\"788\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/02\/AAAI-BlogHeroFeature-1400x788-1.jpg\" alt=\"First page of the &quot;Learning Hierarchical Prompt with Structured Linguistic Knowledge for Language Models&quot; publication to the right of the AAAI conference on a blue and purple gradient background\" class=\"wp-image-1009434\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/02\/AAAI-BlogHeroFeature-1400x788-1.jpg 1400w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/02\/AAAI-BlogHeroFeature-1400x788-1-300x169.jpg 300w, 
https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/02\/AAAI-BlogHeroFeature-1400x788-1-1024x576.jpg 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/02\/AAAI-BlogHeroFeature-1400x788-1-768x432.jpg 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/02\/AAAI-BlogHeroFeature-1400x788-1-1066x600.jpg 1066w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/02\/AAAI-BlogHeroFeature-1400x788-1-655x368.jpg 655w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/02\/AAAI-BlogHeroFeature-1400x788-1-240x135.jpg 240w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/02\/AAAI-BlogHeroFeature-1400x788-1-640x360.jpg 640w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/02\/AAAI-BlogHeroFeature-1400x788-1-960x540.jpg 960w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/02\/AAAI-BlogHeroFeature-1400x788-1-1280x720.jpg 1280w\" sizes=\"auto, (max-width: 1400px) 100vw, 1400px\" \/><\/figure>\n\n\n\n<p>We\u2019re seeing remarkable abilities from visual language models in transforming text descriptions into images. However, creating high-quality visuals requires crafting precise prompts that capture the relationships among the different image elements, a capability that standard prompts lack. In our paper, \u201c<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/learning-hierarchical-prompt-with-structured-linguistic-knowledge-for-vision-language-models\/\">Learning Hierarchical Prompt with Structured Linguistic Knowledge for Vision-Language Models<\/a>,\u201d presented at AAAI-24, we introduce a novel approach using large language models (LLMs) to enhance the images created by visual language models. 
By creating detailed graphs of image descriptions, we leverage LLMs\u2019 linguistic knowledge to produce richer images, expanding their utility in practical applications.&nbsp;<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"3353\" height=\"1315\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/02\/figure1_AAAI-24.png\" alt=\"An example of three types of prompts used in VLMs to recognize a bird: a templated prompt (a photo of a bird), a natural-language prompt that describes the bird category, and a tree-structured prompt highlighting the key entities of birds and their corresponding attributes, such as beak and wings.\" class=\"wp-image-1009371\" style=\"width:700px\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/02\/figure1_AAAI-24.png 3353w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/02\/figure1_AAAI-24-300x118.png 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/02\/figure1_AAAI-24-1024x402.png 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/02\/figure1_AAAI-24-768x301.png 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/02\/figure1_AAAI-24-1536x602.png 1536w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/02\/figure1_AAAI-24-2048x803.png 2048w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/02\/figure1_AAAI-24-240x94.png 240w\" sizes=\"auto, (max-width: 3353px) 100vw, 3353px\" \/><figcaption class=\"wp-element-caption\">Figure 1. A structured graph provides descriptions for each class name.<\/figcaption><\/figure>\n\n\n\n<p>Figure 1 illustrates our method for constructing a structured graph containing key details for each category, or class. 
These graphs contain structured information, with entities (objects, people, and concepts), attributes (characteristics), and the relationships between them. For example, when defining &#8220;water lily,&#8221; we include entities like &#8220;leaves&#8221; and &#8220;blooms,&#8221; their attributes, such as &#8220;round&#8221; and &#8220;white,&#8221; and then apply LLMs\u2019 reasoning capabilities to identify how these terms relate to each other.&nbsp;This is shown in Figure 2.<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"2210\" height=\"2316\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/02\/figure2_AAAI-24.png\" alt=\"The pipeline and instructions used to autonomously generate a category description and knowledge graph with an LLM. We first instruct the LLM to give a category description, and then ask it to parse the key entities, attributes, and their relationships from the unstructured description.\" class=\"wp-image-1009374\" style=\"width:700px\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/02\/figure2_AAAI-24.png 2210w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/02\/figure2_AAAI-24-286x300.png 286w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/02\/figure2_AAAI-24-977x1024.png 977w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/02\/figure2_AAAI-24-768x805.png 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/02\/figure2_AAAI-24-1466x1536.png 1466w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/02\/figure2_AAAI-24-1954x2048.png 1954w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/02\/figure2_AAAI-24-172x180.png 172w\" sizes=\"auto, (max-width: 2210px) 100vw, 2210px\" \/><figcaption class=\"wp-element-caption\">Figure 2. 
With instructions fed into the LLM, we can receive category-related descriptions&nbsp;along with corresponding structured graphs.<\/figcaption><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"how-to-model-structural-knowledge\">How to model structural knowledge<\/h2>\n\n\n\n<p>After identifying and structuring the relationships within the generated prompt descriptions, we implement Hierarchical 
Prompt Tuning (HPT), a new prompt-tuning framework that organizes content hierarchically. This approach allows the visual language model to discern the different levels of information in a prompt, ranging from specific details to broader categories and overarching themes across multiple knowledge domains, as shown in Figure 3. This facilitates the model&#8217;s understanding of the connections among these elements, improving its ability to process complex queries across various topics.<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"1802\" height=\"1655\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/02\/figure3_AAAI-24.png\" alt=\"The overall framework of the proposed hierarchical prompt tuning. Descriptions and relationship-guided graphs with class names are used as input for the frozen text encoder and the hierarchical prompted text encoder, respectively.\" class=\"wp-image-1009377\" style=\"width:700px\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/02\/figure3_AAAI-24.png 1802w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/02\/figure3_AAAI-24-300x276.png 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/02\/figure3_AAAI-24-1024x940.png 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/02\/figure3_AAAI-24-768x705.png 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/02\/figure3_AAAI-24-1536x1411.png 1536w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/02\/figure3_AAAI-24-196x180.png 196w\" sizes=\"auto, (max-width: 1802px) 100vw, 1802px\" \/><figcaption class=\"wp-element-caption\">Figure 3. 
HPT is based on a dual-path asymmetric network, which receives images and various types of text inputs.<\/figcaption><\/figure>\n\n\n\n<p>Central to this method is a novel relationship-guided attention module, designed to help the model identify and analyze the complex interconnections among elements within a graph. This module also captures the interactions between different entities and attributes through a cross-level self-attention mechanism. Self-attention enables the model to assess and prioritize various parts of the input data\u2014here, the graph\u2014according to their relevance. \u201cCross-level\u201d self-attention extends this capability across the graph\u2019s semantic layers, allowing the model to examine relationships at multiple levels of abstraction. This helps the model discern how prompts (or input commands\/questions) interrelate across these levels, giving it a deeper understanding of the categories or concepts involved.<\/p>\n\n\n\n<p>Our findings offer valuable insights into a more effective approach to navigating and understanding complex linguistic data, improving the model\u2019s knowledge discovery and decision-making processes. Building on these advances, we refined the traditional approach to text encoding by introducing a hierarchical prompted text encoder, shown in Figure 4. 
Our aim is to improve how textual information is aligned or correlated with visual data, a necessity for vision-language models that must interpret both text and visual inputs.<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"2178\" height=\"1921\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/02\/figure4_AAAI-24.png\" alt=\"Framework of the hierarchical prompted text encoder, where we apply three types of prompts (low-level, high-level, and global-level prompts) for hierarchical tuning, and design a relationship-guided attention module for better modeling structured knowledge.\" class=\"wp-image-1009380\" style=\"width:700px\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/02\/figure4_AAAI-24.png 2178w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/02\/figure4_AAAI-24-300x265.png 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/02\/figure4_AAAI-24-1024x903.png 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/02\/figure4_AAAI-24-768x677.png 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/02\/figure4_AAAI-24-1536x1355.png 1536w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/02\/figure4_AAAI-24-2048x1806.png 2048w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/02\/figure4_AAAI-24-204x180.png 204w\" sizes=\"auto, (max-width: 2178px) 100vw, 2178px\" \/><figcaption class=\"wp-element-caption\">Figure 4. 
A hierarchical-prompted text encoder learns from multi-level prompts,&nbsp;with a relationship-guided attention module for modeling structural knowledge.<\/figcaption><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"looking-ahead\">Looking ahead<\/h2>\n\n\n\n<p>By incorporating structured knowledge into our model training frameworks, our research lays the groundwork for more sophisticated applications. One example is enhanced image captioning, where visual language models gain the ability to describe the contents of photographs, illustrations, or any visual media with greater accuracy and depth. This improvement could significantly benefit various applications, such as assisting visually impaired users. Additionally, we envision advances in text-to-image generation, enabling visual language models to produce visual representations that are more precise, detailed, and contextually relevant based on textual descriptions.<\/p>\n\n\n\n<p>Looking forward, we hope our research ignites a broader interest in exploring the role of structured knowledge in improving prompt tuning for both visual and language comprehension. This exploration is expected to extend the use of these models beyond basic classification tasks\u2014where models categorize or label data\u2014towards enabling more nuanced and accurate interactions between people and AI systems. By doing so, we pave the way for AI systems to more effectively interpret the complexities of human language.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"acknowledgements\">Acknowledgements<\/h2>\n\n\n\n<p>Thank you to Yubin Wang for his contributions in implementing the algorithm and executing the experiments.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Using LLMs to create structured graphs of image descriptors can enhance the images generated by visual language models. 
Learn how structured knowledge can improve prompt tuning for both visual and language comprehension.<\/p>\n","protected":false},"author":37583,"featured_media":1009434,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","msr-author-ordering":null,"msr_hide_image_in_river":0,"footnotes":""},"categories":[1],"tags":[],"research-area":[13556],"msr-region":[],"msr-event-type":[],"msr-locale":[268875],"msr-post-option":[243984],"msr-impact-theme":[],"msr-promo-type":[],"msr-podcast-series":[],"class_list":["post-1009260","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-research-blog","msr-research-area-artificial-intelligence","msr-locale-en_us","msr-post-option-blog-homepage-featured"],"msr_event_details":{"start":"","end":"","location":""},"podcast_url":"","podcast_episode":"","msr_research_lab":[199560],"msr_impact_theme":[],"related-publications":[],"related-downloads":[],"related-videos":[],"related-academic-programs":[],"related-groups":[815140],"related-projects":[],"related-events":[],"related-researchers":[{"type":"user_nicename","value":"Xinyang Jiang","user_id":41802,"display_name":"Xinyang Jiang","author_link":"<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/xinyangjiang\/\" aria-label=\"Visit the profile page for Xinyang Jiang\">Xinyang Jiang<\/a>","is_active":false,"last_first":"Jiang, Xinyang","people_section":0,"alias":"xinyangjiang"},{"type":"guest","value":"yubin-wang","user_id":"1009401","display_name":"Yubin Wang","author_link":"Yubin Wang","is_active":true,"last_first":"Wang, Yubin","people_section":0,"alias":"yubin-wang"},{"type":"user_nicename","value":"Dongsheng Li","user_id":39402,"display_name":"Dongsheng Li","author_link":"<a 
href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/dongsli\/\" aria-label=\"Visit the profile page for Dongsheng Li\">Dongsheng Li<\/a>","is_active":false,"last_first":"Li, Dongsheng","people_section":0,"alias":"dongsli"},{"type":"guest","value":"cairong-zhao","user_id":"1009407","display_name":"Cairong Zhao","author_link":"<a href=\"https:\/\/vill-lab.github.io\/\" aria-label=\"Visit the profile page for Cairong Zhao\">Cairong Zhao<\/a>","is_active":true,"last_first":"Zhao, Cairong","people_section":0,"alias":"cairong-zhao"}],"msr_type":"Post","featured_image_thumbnail":"<img width=\"960\" height=\"540\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/02\/AAAI-BlogHeroFeature-1400x788-1-960x540.jpg\" class=\"img-object-cover\" alt=\"First page of the &quot;Learning Hierarchical Prompt with Structured Linguistic Knowledge for Language Models&quot; publication to the right of the AAAI conference on a blue and purple gradient background\" decoding=\"async\" loading=\"lazy\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/02\/AAAI-BlogHeroFeature-1400x788-1-960x540.jpg 960w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/02\/AAAI-BlogHeroFeature-1400x788-1-300x169.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/02\/AAAI-BlogHeroFeature-1400x788-1-1024x576.jpg 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/02\/AAAI-BlogHeroFeature-1400x788-1-768x432.jpg 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/02\/AAAI-BlogHeroFeature-1400x788-1-1066x600.jpg 1066w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/02\/AAAI-BlogHeroFeature-1400x788-1-655x368.jpg 655w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/02\/AAAI-BlogHeroFeature-1400x788-1-240x135.jpg 240w, 
https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/02\/AAAI-BlogHeroFeature-1400x788-1-640x360.jpg 640w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/02\/AAAI-BlogHeroFeature-1400x788-1-1280x720.jpg 1280w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/02\/AAAI-BlogHeroFeature-1400x788-1.jpg 1400w\" sizes=\"auto, (max-width: 960px) 100vw, 960px\" \/>","byline":"<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/xinyangjiang\/\" title=\"Go to researcher profile for Xinyang Jiang\" aria-label=\"Go to researcher profile for Xinyang Jiang\" data-bi-type=\"byline author\" data-bi-cN=\"Xinyang Jiang\">Xinyang Jiang<\/a>, Yubin Wang, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/dongsli\/\" title=\"Go to researcher profile for Dongsheng Li\" aria-label=\"Go to researcher profile for Dongsheng Li\" data-bi-type=\"byline author\" data-bi-cN=\"Dongsheng Li\">Dongsheng Li<\/a>, and <a href=\"https:\/\/vill-lab.github.io\/\" title=\"Go to researcher profile for Cairong Zhao\" aria-label=\"Go to researcher profile for Cairong Zhao\" data-bi-type=\"byline author\" data-bi-cN=\"Cairong Zhao\">Cairong Zhao<\/a>","formattedDate":"February 27, 2024","formattedExcerpt":"Using LLMs to create structured graphs of image descriptors can enhance the images generated by visual language models. 
Learn how structured knowledge can improve prompt tuning for both visual and language comprehension.","locale":{"slug":"en_us","name":"English","native":"","english":"English"},"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts\/1009260","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/users\/37583"}],"replies":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/comments?post=1009260"}],"version-history":[{"count":23,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts\/1009260\/revisions"}],"predecessor-version":[{"id":1010025,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts\/1009260\/revisions\/1010025"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media\/1009434"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=1009260"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/categories?post=1009260"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/tags?post=1009260"},{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=1009260"},{"taxonomy":"msr-region","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-region?post=1009260"},{"taxonomy":"msr-event-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event-type?post=1009260"},{"taxonomy":"
msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=1009260"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=1009260"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=1009260"},{"taxonomy":"msr-promo-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-promo-type?post=1009260"},{"taxonomy":"msr-podcast-series","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-podcast-series?post=1009260"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}