{"id":827599,"date":"2022-03-22T10:00:00","date_gmt":"2022-03-22T17:00:00","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?p=827599"},"modified":"2022-08-17T09:51:47","modified_gmt":"2022-08-17T16:51:47","slug":"microsoft-translator-enhanced-with-z-code-mixture-of-experts-models","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-us\/research\/blog\/microsoft-translator-enhanced-with-z-code-mixture-of-experts-models\/","title":{"rendered":"Microsoft Translator enhanced with Z-code Mixture of Experts models"},"content":{"rendered":"\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"577\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/03\/1400x788_Ai_translator_Hero_still-1024x577.jpg\" alt=\"Z-code multilingual model representation diagram\" class=\"wp-image-827977\"\/><\/figure>\n\n\n\n<p>Translator, a Microsoft Azure Cognitive Service, is adopting Z-code <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/blog\/deepspeed-powers-8x-larger-moe-model-training-with-high-performance\/\">Mixture of Experts models<\/a>, a breakthrough AI technology that significantly improves the quality of production translation models. As a component of Microsoft\u2019s larger <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/blog\/a-holistic-representation-toward-integrative-ai\/\">XYZ-code initiative<\/a> to combine AI models for text, vision, audio, and language, Z-code supports the creation of AI systems that can speak, see, hear, and understand. 
This effort is a part of <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/azure.microsoft.com\/en-us\/services\/cognitive-services\/\" target=\"_blank\" rel=\"noopener noreferrer\">Azure AI<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> and <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/turing.microsoft.com\/\" target=\"_blank\" rel=\"noopener noreferrer\">Project Turing<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, focusing on building multilingual, large-scale language models that support various production teams. Translator is using NVIDIA GPUs and Triton Inference Server to deploy and scale these models efficiently for high-performance inference. Translator is the first machine translation provider to introduce this technology live for customers.<\/p>\n\n\n\n<h2 id=\"z-code-moe-boosts-efficiency-and-quality\">Z-code MoE boosts efficiency and quality<\/h2>\n\n\n\n<p>Z-code models utilize a new architecture called Mixture of Experts (MoE), where different parts of the models can learn different tasks. The models learn to translate between multiple languages at the same time. The Z-code MoE model utilizes more parameters while dynamically selecting which parameters to use for a given input. This enables the model to specialize a subset of the parameters (experts) during training. 
At runtime, the model uses the relevant experts for the task, which is more computationally efficient than utilizing all of the model\u2019s parameters.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"640\" height=\"480\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/03\/Zcode_MoE_v2-final_GIF.gif\" alt=\"animated graphic showing Z-code MoE model translating from English to French\" class=\"wp-image-828277\"\/><figcaption>Figure 1: Z-code MoE model translating from English to French. The model dynamically selects subsets of its parameters to be utilized for each input. <\/figcaption><\/figure>\n\n\n\n<p>Newly introduced Z-code MoE models leverage transfer learning, which enables efficient knowledge sharing across similar languages. Moreover, the models utilize both parallel and monolingual data during the training process. This opens the way to high-quality machine translation beyond the high-resource languages and improves the quality of low-resource languages that lack significant training data. This approach can have a positive impact on AI fairness, since both high-resource and low-resource languages see improvements.<\/p>\n\n\n\n<p>We have trained <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/scalable-and-efficient-moe-training-for-multitask-multilingual-models\/\">translation systems for research<\/a> purposes with 200 billion parameters supporting 100 language pairs. Though such large systems significantly improved translation quality, they also made it challenging to deploy the models in a production environment cost-effectively. For our production deployment, we opted to train a set of 5-billion-parameter models, which are 80 times larger than our currently deployed models. We trained a multilingual model per set of languages, where each model can serve up to 20 language pairs and therefore replace up to 20 of the current systems. 
This enabled our models to maximize transfer learning among languages while remaining deployable at a reasonable runtime cost. We compared the quality improvements of the new MoE models to the current production systems using human evaluation. The figure below shows the results of the models on various language pairs. The Z-code MoE systems outperformed individual bilingual systems, with an average improvement of 4 percent. For instance, the models improved English to French translations by 3.2 percent, English to Turkish by 5.8 percent, Japanese to English by 7.6 percent, English to Arabic by 9.3 percent, and English to Slovenian by 15 percent.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"575\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/03\/Revised-MS-Translator-graphic_without-logo-1024x575.jpg\" alt=\"graphic showing quality gains of Z-code MoE models over existing models. Languages are ordered by training data sizes.\" class=\"wp-image-827971\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/03\/Revised-MS-Translator-graphic_without-logo-1024x575.jpg 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/03\/Revised-MS-Translator-graphic_without-logo-300x168.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/03\/Revised-MS-Translator-graphic_without-logo-768x431.jpg 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/03\/Revised-MS-Translator-graphic_without-logo-1536x862.jpg 1536w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/03\/Revised-MS-Translator-graphic_without-logo-2048x1150.jpg 2048w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/03\/Revised-MS-Translator-graphic_without-logo-1066x600.jpg 1066w, 
https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/03\/Revised-MS-Translator-graphic_without-logo-655x368.jpg 655w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/03\/Revised-MS-Translator-graphic_without-logo-343x193.jpg 343w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/03\/Revised-MS-Translator-graphic_without-logo-240x135.jpg 240w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/03\/Revised-MS-Translator-graphic_without-logo-scaled-640x360.jpg 640w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/03\/Revised-MS-Translator-graphic_without-logo-scaled-960x540.jpg 960w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/03\/Revised-MS-Translator-graphic_without-logo-1280x720.jpg 1280w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/03\/Revised-MS-Translator-graphic_without-logo-1920x1080.jpg 1920w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><figcaption>Figure 2: Quality gains of Z-code MoE models over existing models. Languages are ordered by training data sizes. 
<\/figcaption><\/figure>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"margin-callout\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 annotations__list--right\">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/scalable-and-efficient-moe-training-for-multitask-multilingual-models\/\" data-bi-cN=\"Scalable and Efficient MoE Training for Multitask Multilingual Models\" data-external-link=\"false\" data-bi-aN=\"margin-callout\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Scalable and Efficient MoE Training for Multitask Multilingual Models<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<p>Training large models with billions of parameters is challenging. The Translator team collaborated with the Microsoft DeepSpeed team to develop a high-performance system that helped train massive-scale Z-code MoE models, enabling us to efficiently scale and deploy Z-code models for translation.<\/p>\n\n\n\n<p>We partnered with NVIDIA to build optimized, faster engines that can be used at runtime to deploy the new Z-code MoE models on GPUs. 
NVIDIA developed custom CUDA kernels and leveraged the <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/developer.nvidia.com\/blog\/implementing-high-performance-matrix-multiplication-using-cutlass-v2-8\/\" target=\"_blank\" rel=\"noopener noreferrer\">CUTLASS<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> and <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/github.com\/NVIDIA\/FasterTransformer\" target=\"_blank\" rel=\"noopener noreferrer\">FasterTransformer<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> libraries to efficiently implement MoE layers on a single V100 GPU. This implementation achieved up to 27x throughput improvements over standard GPU (PyTorch) runtimes. We used NVIDIA\u2019s open source <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/developer.nvidia.com\/blog\/fast-and-scalable-ai-model-deployment-with-nvidia-triton-inference-server\/\" target=\"_blank\" rel=\"noopener noreferrer\">Triton Inference Server<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> to serve Z-code MoE models. 
We used Triton\u2019s dynamic batching feature to pool several requests into a big batch for higher throughput, which enabled us to ship large models with relatively low runtime costs.<\/p>\n\n\n\n<h2 id=\"how-can-you-use-the-new-z-code-models\">How can you use the new Z-code models?<\/h2>\n\n\n\n<p>Z-code models are available now by invitation to customers using <a href=\"https:\/\/www.microsoft.com\/translator\/business\/document-translation\/\" target=\"_blank\" rel=\"noreferrer noopener\">Document Translation<\/a>, a feature that translates entire documents, or volumes of documents, in a variety of different file formats while preserving their original formatting. Z-code models will be made available to all customers and to other Translator products in phases. 
Please <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/aka.ms\/DocumentTranslationZcode\" target=\"_blank\" rel=\"noopener noreferrer\">fill out this form<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> to request access to Document Translation using Z-code models.<\/p>\n\n\n\n<h4 id=\"learn-more\">Learn more<\/h4>\n\n\n\n<ul class=\"wp-block-list\"><li><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/aka.ms\/AAg7o2c\" target=\"_blank\" rel=\"noopener noreferrer\">New Z-code Mixture of Experts models improve quality, efficiency in Translator and Azure AI<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/li><li><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/azure.microsoft.com\/en-us\/services\/cognitive-services\/?OCID=AID2200277_SEM_d5d7a73d230612ae4451c512f4af9788:G:s&ef_id=d5d7a73d230612ae4451c512f4af9788:G:s&msclkid=d5d7a73d230612ae4451c512f4af9788\" target=\"_blank\" rel=\"noopener noreferrer\">Cognitive Services\u2014APIs for AI Solutions | Microsoft Azure<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/li><li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/scalable-and-efficient-moe-training-for-multitask-multilingual-models\/\">Scalable and Efficient MoE Training for Multitask Multilingual Models<\/a><\/li><li><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/developer.nvidia.com\/blog\/fast-and-scalable-ai-model-deployment-with-nvidia-triton-inference-server\/\" target=\"_blank\" rel=\"noopener noreferrer\">Fast and Scalable AI Model Deployment with NVIDIA Triton Inference Server | NVIDIA Technical Blog<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/li><\/ul>\n\n\n\n<h4 id=\"acknowledgements\">Acknowledgements<\/h4>\n\n\n\n<p>The following people contributed 
to this work: Abdelrahman Abouelenin, Ahmed Salah, Akiko Eriguchi, Alex Cheng, Alex Muzio, Amr Hendy, Arul Menezes, Brad Ballinger, Christophe Poulain, Evram Narouz, Fai Sigalov, Hany Hassan Awadalla, Hitokazu Matsushita, Mohamed Afify, Raffy Bekhit, Rohit Jain, Steven Nguyen, Vikas Raunak, Vishal Chowdhary, and Young Jin Kim.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Translator, a Microsoft Azure Cognitive Service, is adopting Z-code Mixture of Experts models, a breakthrough AI technology that significantly improves the quality of production translation models. As a component of Microsoft\u2019s larger XYZ-code initiative to combine AI models for text, vision, audio, and language, Z-code supports the creation of AI systems that can speak, see, [&hellip;]<\/p>\n","protected":false},"author":40306,"featured_media":827977,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","msr-author-ordering":[{"type":"user_nicename","value":"Hany Hassan Awadalla","user_id":"31965"},{"type":"user_nicename","value":"Krishna Doss Mohan","user_id":"41521"},{"type":"user_nicename","value":"Vishal 
Chowdhary","user_id":"34599"}],"msr_hide_image_in_river":0,"footnotes":""},"categories":[1],"tags":[],"research-area":[13556],"msr-region":[],"msr-event-type":[],"msr-locale":[268875],"msr-post-option":[],"msr-impact-theme":[],"msr-promo-type":[],"msr-podcast-series":[],"class_list":["post-827599","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-research-blog","msr-research-area-artificial-intelligence","msr-locale-en_us"],"msr_event_details":{"start":"","end":"","location":""},"podcast_url":"","podcast_episode":"","msr_research_lab":[],"msr_impact_theme":[],"related-publications":[],"related-downloads":[],"related-videos":[],"related-academic-programs":[],"related-groups":[],"related-projects":[765364],"related-events":[],"related-researchers":[{"type":"user_nicename","value":"Krishna Doss Mohan","user_id":41521,"display_name":"Krishna Doss Mohan","author_link":"<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/kdoss\/\" aria-label=\"Visit the profile page for Krishna Doss Mohan\">Krishna Doss Mohan<\/a>","is_active":false,"last_first":"Mohan, Krishna Doss","people_section":0,"alias":"kdoss"},{"type":"user_nicename","value":"Vishal Chowdhary","user_id":34599,"display_name":"Vishal Chowdhary","author_link":"<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/vishalc\/\" aria-label=\"Visit the profile page for Vishal Chowdhary\">Vishal Chowdhary<\/a>","is_active":false,"last_first":"Chowdhary, Vishal","people_section":0,"alias":"vishalc"}],"msr_type":"Post","featured_image_thumbnail":"<img width=\"960\" height=\"540\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/03\/1400x788_Ai_translator_Hero_still-960x540.jpg\" class=\"img-object-cover\" alt=\"Z-code multilingual model representation diagram\" decoding=\"async\" loading=\"lazy\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/03\/1400x788_Ai_translator_Hero_still-960x540.jpg 
960w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/03\/1400x788_Ai_translator_Hero_still-300x169.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/03\/1400x788_Ai_translator_Hero_still-1024x577.jpg 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/03\/1400x788_Ai_translator_Hero_still-768x432.jpg 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/03\/1400x788_Ai_translator_Hero_still-1536x865.jpg 1536w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/03\/1400x788_Ai_translator_Hero_still-2048x1153.jpg 2048w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/03\/1400x788_Ai_translator_Hero_still-1066x600.jpg 1066w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/03\/1400x788_Ai_translator_Hero_still-655x368.jpg 655w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/03\/1400x788_Ai_translator_Hero_still-343x193.jpg 343w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/03\/1400x788_Ai_translator_Hero_still-240x135.jpg 240w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/03\/1400x788_Ai_translator_Hero_still-640x360.jpg 640w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/03\/1400x788_Ai_translator_Hero_still-1280x720.jpg 1280w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/03\/1400x788_Ai_translator_Hero_still-1920x1080.jpg 1920w\" sizes=\"auto, (max-width: 960px) 100vw, 960px\" \/>","byline":"Hany Hassan Awadalla, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/kdoss\/\" title=\"Go to researcher profile for Krishna Doss Mohan\" aria-label=\"Go to researcher profile for Krishna Doss Mohan\" data-bi-type=\"byline author\" data-bi-cN=\"Krishna Doss Mohan\">Krishna Doss Mohan<\/a>, and <a 
href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/vishalc\/\" title=\"Go to researcher profile for Vishal Chowdhary\" aria-label=\"Go to researcher profile for Vishal Chowdhary\" data-bi-type=\"byline author\" data-bi-cN=\"Vishal Chowdhary\">Vishal Chowdhary<\/a>","formattedDate":"March 22, 2022","formattedExcerpt":"Translator, a Microsoft Azure Cognitive Service, is adopting Z-code Mixture of Experts models, a breakthrough AI technology that significantly improves the quality of production translation models. As a component of Microsoft\u2019s larger XYZ-code initiative to combine AI models for text, vision, audio, and language, Z-code&hellip;","locale":{"slug":"en_us","name":"English","native":"","english":"English"},"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts\/827599","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/users\/40306"}],"replies":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/comments?post=827599"}],"version-history":[{"count":9,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts\/827599\/revisions"}],"predecessor-version":[{"id":870624,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts\/827599\/revisions\/870624"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media\/827977"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=827599"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/categories?post=827599"},{"taxonomy":"post_tag","embedd
able":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/tags?post=827599"},{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=827599"},{"taxonomy":"msr-region","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-region?post=827599"},{"taxonomy":"msr-event-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event-type?post=827599"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=827599"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=827599"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=827599"},{"taxonomy":"msr-promo-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-promo-type?post=827599"},{"taxonomy":"msr-podcast-series","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-podcast-series?post=827599"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}