{"id":1027098,"date":"2024-05-07T09:00:00","date_gmt":"2024-05-07T16:00:00","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/blog\/loftq-reimagining-llm-fine-tuning-with-smarter-initialization\/"},"modified":"2024-05-01T07:52:24","modified_gmt":"2024-05-01T14:52:24","slug":"loftq-reimagining-llm-fine-tuning-with-smarter-initialization","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-us\/research\/blog\/loftq-reimagining-llm-fine-tuning-with-smarter-initialization\/","title":{"rendered":"LoftQ: Reimagining LLM fine-tuning with smarter initialization"},"content":{"rendered":"\n<p class=\"has-text-align-center\"><strong><em>This research paper was presented at the <\/em><\/strong><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/iclr.cc\/Conferences\/2024\" target=\"_blank\" rel=\"noopener noreferrer\"><strong><em>12<sup>th<\/sup> International Conference on Learning Representations<\/em><\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><strong><em> (ICLR 2024), the premier conference dedicated to the advancement of deep learning.<\/em><\/strong><\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1401\" height=\"788\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/04\/LoftQ-BlogHeroFeature-1400x788-1.png\" alt=\"Teal background with ICLR logo on the right (head and face) with LoftQ paper on the right.\" class=\"wp-image-1027119\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/04\/LoftQ-BlogHeroFeature-1400x788-1.png 1401w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/04\/LoftQ-BlogHeroFeature-1400x788-1-300x169.png 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/04\/LoftQ-BlogHeroFeature-1400x788-1-1024x576.png 1024w, 
https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/04\/LoftQ-BlogHeroFeature-1400x788-1-768x432.png 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/04\/LoftQ-BlogHeroFeature-1400x788-1-1066x600.png 1066w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/04\/LoftQ-BlogHeroFeature-1400x788-1-655x368.png 655w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/04\/LoftQ-BlogHeroFeature-1400x788-1-240x135.png 240w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/04\/LoftQ-BlogHeroFeature-1400x788-1-640x360.png 640w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/04\/LoftQ-BlogHeroFeature-1400x788-1-960x540.png 960w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/04\/LoftQ-BlogHeroFeature-1400x788-1-1280x720.png 1280w\" sizes=\"auto, (max-width: 1401px) 100vw, 1401px\" \/><\/figure>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"margin-callout\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 annotations__list--right\">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/loftq-lora-fine-tuning-aware-quantization-for-large-language-models\/\" data-bi-cN=\"LoftQ: LoRA-Fine-Tuning-Aware Quantization for Large Language Models\" data-external-link=\"false\" data-bi-aN=\"margin-callout\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>LoftQ: LoRA-Fine-Tuning-Aware Quantization for Large Language Models<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<p>Large language models (LLMs) use extensive datasets and 
advanced algorithms to generate nuanced, context-sensitive content. However, their development requires substantial computational resources. To address this, we developed LoftQ, an innovative technique that streamlines the fine-tuning process\u2014which is used to adapt pre-trained language models to perform well in specialized applications, such as analyzing medical documents. During fine-tuning, the model undergoes additional training on a smaller, task-specific dataset. This results in improved performance, such as more accurate predictions, better understanding of domain-specific language, and more relevant responses in the context of the specialized area.<\/p>\n\n\n\n<p>LoftQ\u2019s strength lies in its ability to combine quantization and adaptive initialization during fine-tuning. Quantization reduces the precision of model parameters, lowering memory and computation needs. This not only accelerates processing but also reduces power consumption. Adaptive initialization closely aligns the model\u2019s parameters to its optimal pre-trained state, preserving its capabilities while minimizing resource use. 
Our paper, \u201c<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/loftq-lora-fine-tuning-aware-quantization-for-large-language-models\/\" target=\"_blank\" rel=\"noreferrer noopener\">LoftQ: LoRA-Fine-Tuning-Aware Quantization for Large Language Models<\/a>,\u201d presented at ICLR 2024, details how this method can help make AI technologies more efficient and sustainable.&nbsp;<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"how-loftq-works\">How LoftQ works&nbsp;<\/h2>\n\n\n\n<p>LoftQ builds on the principles of <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/lora-low-rank-adaptation-of-large-language-models\/\" target=\"_blank\" rel=\"noreferrer noopener\">LoRA<\/a> and <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/github.com\/artidoro\/qlora\" target=\"_blank\" rel=\"noopener noreferrer\">QLoRA<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>. LoRA is a method that greatly reduces the number of parameters needed for training, decreasing the memory requirements for fine-tuning. QLoRA is a fine-tuning approach that uses 4-bit quantized, frozen weights and low rank adapters, significantly reducing memory requirements while maintaining high performance. This is illustrated in Table 1, which shows the amount of memory needed for fine-tuning an LLM with 7 billion parameters as well as the memory requirements for LoRA and QLoRA. 
LoRA achieves a fourfold reduction in memory usage, and QLoRA further reduces it by twofold.<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"3053\" height=\"1210\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/04\/finetuning-1.png\" alt=\"LoftQ - Table 1: This table shows the GPU memory usage for a 7-billion parameter LLM, with the following configurations: full fine-tuning on the left, LoRA in the middle, and QLoRA on the right.\" class=\"wp-image-1029312\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/04\/finetuning-1.png 3053w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/04\/finetuning-1-300x119.png 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/04\/finetuning-1-1024x406.png 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/04\/finetuning-1-768x304.png 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/04\/finetuning-1-1536x609.png 1536w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/04\/finetuning-1-2048x812.png 2048w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/04\/finetuning-1-240x95.png 240w\" sizes=\"auto, (max-width: 3053px) 100vw, 3053px\" \/><figcaption class=\"wp-element-caption\">Table 1: This table shows the GPU memory usage for a 7-billion parameter LLM with the following configurations: full fine-tuning on the left, LoRA in the middle, and QLoRA on the right.<\/figcaption><\/figure>\n\n\n\n<p>Unlike LoRA, QLoRA comes with a tradeoff, where some quality of the pretrained model is sacrificed due to the quantization of weights. LoftQ recognizes this and optimizes the initialization of quantization and low-rank adaptation matrices. 
That is, LoftQ seeks to identify a combination of a quantized matrix and a low-rank matrix such that their sum closely approximates the original pretrained weight. This is done for every matrix that would be adapted in the model.<\/p>\n\n\n\n<p>The LoftQ algorithm alternates between two primary steps. First, it quantizes (simplifies) the weights; then it finds the best low-rank factors that approximate the residual between the pretrained weight and its quantized counterpart.&nbsp;The process repeats for a few steps. This method enables the fine-tuning process to start from a more effective initial state, preserving accuracy while using less computational power and greatly simplified weights.<\/p>\n\n\n\n<p>LoftQ requires a one-time setup to simplify and prepare these weights, allowing a fixed portion of the model\u2019s parameters (e.g., 5 percent) to be adjusted. Once established, this configuration can be repeatedly applied as the model transitions between various tasks and settings. <\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"evaluating-loftq\">Evaluating LoftQ&nbsp;<\/h2>\n\n\n\n<p>Tests using various types of LLMs, including those with different combinations of encoding and decoding capabilities like Llama-2, show that models initialized with LoftQ consistently achieve strong performance, often matching or surpassing those configured with QLoRA.<\/p>\n\n\n\n<p>In practical terms, comparing the performance of LoftQ and QLoRA on different tasks using the Llama-2 model family yields distinct results, which are highlighted in Table 2. For the WikiText-2 dataset, which measures the model\u2019s perplexity (lower is better), and the GSM8K dataset, which tests the model\u2019s ability to solve basic math problems (higher is better), we demonstrate the effectiveness of varying degrees of weight simplification\u2014averaging 3, 2.5, and 2.25 bits per weight. 
Our <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/loftq-lora-fine-tuning-aware-quantization-for-large-language-models\/\" target=\"_blank\" rel=\"noreferrer noopener\">paper<\/a> discusses the results in more detail.&nbsp;<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"1605\" height=\"1196\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/04\/LoftQ_llama2_table2.png\" alt=\"LoftQ - Table 2. This table compares LoftQ and QLoRA during the fine-tuning of two Llama-2 models on the Wikitext-2 and GSM8K datasets.\" class=\"wp-image-1027116\" style=\"width:731px;height:auto\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/04\/LoftQ_llama2_table2.png 1605w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/04\/LoftQ_llama2_table2-300x224.png 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/04\/LoftQ_llama2_table2-1024x763.png 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/04\/LoftQ_llama2_table2-768x572.png 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/04\/LoftQ_llama2_table2-1536x1145.png 1536w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/04\/LoftQ_llama2_table2-80x60.png 80w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/04\/LoftQ_llama2_table2-240x180.png 240w\" sizes=\"auto, (max-width: 1605px) 100vw, 1605px\" \/><figcaption class=\"wp-element-caption\">Table 2. 
This table compares LoftQ and QLoRA during the fine-tuning of two Llama-2 models on the Wikitext-2 and GSM8K datasets.<\/figcaption><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"implications-and-looking-forward\">Implications and looking forward&nbsp;<\/h2>\n\n\n\n<p>LoftQ promises to advance the field of AI by accelerating research and facilitating the creation of cutting-edge tools while supporting sustainable development. While initially focused on LLMs, LoftQ\u2019s flexible design also supports fine-tuning in other types of models, such as those for vision and speech technologies. As our research progresses, we expect to make further enhancements that will boost performance on downstream tasks. We hope these improvements will lead to broader adoption across various AI applications. We\u2019re excited about the breadth of this technology\u2019s applicability and encourage the AI community to explore its benefits. LoftQ is available as open source through the <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/github.com\/huggingface\/peft\/blob\/56773b9a92b141111d65fe3548d0c30233358868\/examples\/loftq_finetuning\/README.md\" target=\"_blank\" rel=\"noopener noreferrer\">Hugging Face PEFT library<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>LoftQ boosts LLM efficiency by streamlining the fine-tuning process, reducing computational demands while preserving high performance. 
Innovations like this can help make AI technology more energy-efficient.<\/p>\n","protected":false},"author":42735,"featured_media":1027119,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","msr-author-ordering":null,"msr_hide_image_in_river":0,"footnotes":""},"categories":[1],"tags":[],"research-area":[13556],"msr-region":[],"msr-event-type":[],"msr-locale":[268875],"msr-post-option":[243984],"msr-impact-theme":[],"msr-promo-type":[],"msr-podcast-series":[],"class_list":["post-1027098","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-research-blog","msr-research-area-artificial-intelligence","msr-locale-en_us","msr-post-option-blog-homepage-featured"],"msr_event_details":{"start":"","end":"","location":""},"podcast_url":"","podcast_episode":"","msr_research_lab":[],"msr_impact_theme":[],"related-publications":[],"related-downloads":[],"related-videos":[],"related-academic-programs":[],"related-groups":[],"related-projects":[],"related-events":[1014303],"related-researchers":[{"type":"user_nicename","value":"Nikos Karampatziakis","user_id":33104,"display_name":"Nikos Karampatziakis","author_link":"<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/nikosk\/\" aria-label=\"Visit the profile page for Nikos Karampatziakis\">Nikos Karampatziakis<\/a>","is_active":false,"last_first":"Karampatziakis, Nikos","people_section":0,"alias":"nikosk"},{"type":"user_nicename","value":"Chen Liang","user_id":43239,"display_name":"Chen Liang","author_link":"<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/chenliang1\/\" aria-label=\"Visit the profile page for Chen Liang\">Chen Liang<\/a>","is_active":false,"last_first":"Liang, 
Chen","people_section":0,"alias":"chenliang1"},{"type":"user_nicename","value":"Weizhu Chen","user_id":34863,"display_name":"Weizhu Chen","author_link":"<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/wzchen\/\" aria-label=\"Visit the profile page for Weizhu Chen\">Weizhu Chen<\/a>","is_active":false,"last_first":"Chen, Weizhu","people_section":0,"alias":"wzchen"},{"type":"guest","value":"yixiao-li","user_id":"1027107","display_name":"Yixiao Li","author_link":"<a href=\"https:\/\/yxli2123.github.io\/\" aria-label=\"Visit the profile page for Yixiao Li\">Yixiao Li<\/a>","is_active":true,"last_first":"Li, Yixiao","people_section":0,"alias":"yixiao-li"},{"type":"guest","value":"yifan-yu-2","user_id":"1027137","display_name":"Yifan Yu","author_link":"<a href=\"https:\/\/www.linkedin.com\/in\/yifan-yu-0495011b4\/\" aria-label=\"Visit the profile page for Yifan Yu\">Yifan Yu<\/a>","is_active":true,"last_first":"Yu, Yifan","people_section":0,"alias":"yifan-yu-2"},{"type":"guest","value":"tuo-zhao","user_id":"782632","display_name":"Tuo Zhao","author_link":"<a href=\"https:\/\/www2.isye.gatech.edu\/~tzhao80\/\" aria-label=\"Visit the profile page for Tuo Zhao\">Tuo Zhao<\/a>","is_active":true,"last_first":"Zhao, Tuo","people_section":0,"alias":"tuo-zhao"}],"msr_type":"Post","featured_image_thumbnail":"<img width=\"960\" height=\"540\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/04\/LoftQ-BlogHeroFeature-1400x788-1-960x540.png\" class=\"img-object-cover\" alt=\"LoftQ paper at ICLR 2024\" decoding=\"async\" loading=\"lazy\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/04\/LoftQ-BlogHeroFeature-1400x788-1-960x540.png 960w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/04\/LoftQ-BlogHeroFeature-1400x788-1-300x169.png 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/04\/LoftQ-BlogHeroFeature-1400x788-1-1024x576.png 1024w, 
https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/04\/LoftQ-BlogHeroFeature-1400x788-1-768x432.png 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/04\/LoftQ-BlogHeroFeature-1400x788-1-1066x600.png 1066w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/04\/LoftQ-BlogHeroFeature-1400x788-1-655x368.png 655w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/04\/LoftQ-BlogHeroFeature-1400x788-1-240x135.png 240w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/04\/LoftQ-BlogHeroFeature-1400x788-1-640x360.png 640w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/04\/LoftQ-BlogHeroFeature-1400x788-1-1280x720.png 1280w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/04\/LoftQ-BlogHeroFeature-1400x788-1.png 1401w\" sizes=\"auto, (max-width: 960px) 100vw, 960px\" \/>","byline":"","formattedDate":"May 7, 2024","formattedExcerpt":"LoftQ boosts LLM efficiency by streamlining the fine-tuning process, reducing computational demands while preserving high performance. 
Innovations like this can help make AI technology more energy-efficient.","locale":{"slug":"en_us","name":"English","native":"","english":"English"},"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts\/1027098","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/users\/42735"}],"replies":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/comments?post=1027098"}],"version-history":[{"count":38,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts\/1027098\/revisions"}],"predecessor-version":[{"id":1029315,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts\/1027098\/revisions\/1029315"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media\/1027119"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=1027098"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/categories?post=1027098"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/tags?post=1027098"},{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=1027098"},{"taxonomy":"msr-region","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-region?post=1027098"},{"taxonomy":"msr-event-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event-type?post=1027098"},{"taxonomy":"msr-locale","embeddable":tru
e,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=1027098"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=1027098"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=1027098"},{"taxonomy":"msr-promo-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-promo-type?post=1027098"},{"taxonomy":"msr-podcast-series","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-podcast-series?post=1027098"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}