{"id":707419,"date":"2020-11-23T05:00:29","date_gmt":"2020-11-23T13:00:29","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-project&#038;p=707419"},"modified":"2021-11-21T08:22:01","modified_gmt":"2021-11-21T16:22:01","slug":"text-to-speech","status":"publish","type":"msr-project","link":"https:\/\/www.microsoft.com\/en-us\/research\/project\/text-to-speech\/","title":{"rendered":"Text to Speech"},"content":{"rendered":"<p>We are working on neural network based text to speech (TTS). including acoustic model, vocoder, frontend, and end-to-end text-to-wave model. Our research works have been transferred in Microsoft Azure TTS service to improve the product experiences.<\/p>\n<p><strong>Product Transfer <\/strong>(Azure TTS page: <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/azure.microsoft.com\/en-us\/services\/cognitive-services\/text-to-speech\/\">https:\/\/azure.microsoft.com\/en-us\/services\/cognitive-services\/text-to-speech\/<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>)<\/p>\n<ul>\n<li>Our <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/arxiv.org\/pdf\/1905.09263.pdf\">FastSpeech<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>\u00a0has supported more than 70 languages in Microsoft Azure Text to Speech Service!\u00a0 [<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/techcommunity.microsoft.com\/t5\/azure-ai\/neural-text-to-speech-extends-support-to-15-more-languages-with\/ba-p\/1505911\">News-1<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>] [<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/mp.weixin.qq.com\/s\/4hgMmj9Qmwsxs4w7vYGvCQ\">News-2<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>]<\/li>\n<li>Our\u00a0<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/arxiv.org\/pdf\/2008.03687.pdf\">LRSpeech<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>\u00a0helps Azure TTS to extend 5 new low-resource languages! 
[<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/techcommunity.microsoft.com\/t5\/azure-ai\/neural-text-to-speech-previews-five-new-languages-with\/ba-p\/1907604\">News<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>]<\/li>\n<li>Our\u00a0<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/arxiv.org\/pdf\/2103.00993.pdf\">AdaSpeech<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>\u00a0has been deployed in Microsoft Azure TTS to support custom voice.<\/li>\n<\/ul>\n<p><strong>Paper Publication <\/strong>(Speech demo page: <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/speechresearch.github.io\/\">https:\/\/speechresearch.github.io\/<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>)<\/p>\n<ul>\n<li>Jiawei Chen, Xu Tan, Yichong Leng, Jin Xu, Guihua Wen, Tao Qin, Tie-Yan Liu, Speech-T: Transducer for Text to Speech and Beyond, NeurIPS, 2021.<\/li>\n<li class=\"\">Xu Tan, Tao Qin, Frank Soong, Tie-Yan Liu,\u00a0<em>A Survey on Neural Speech Synthesis<\/em>, arXiv 2021. [<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/arxiv.org\/pdf\/2106.15561.pdf\">Paper<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>] [<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/blog.csdn.net\/weixin_42721167\/article\/details\/118684294\">Article-1<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>] [<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/mp.weixin.qq.com\/s?__biz=MzU2OTA0NzE2NA==&mid=2247561935&idx=1&sn=6452e1be58b5d9d05f5794e54c05aaee&chksm=fc8713dccbf09aca60d242b45a07ffea57209c9b9a36edac487d478d127ef24abd141dcce01d&mpshare=1&scene=1&srcid=0725TAu10EeXIl3vyv6ad8Wg&sharer_sharetime=1627186618826&sharer_shareid=6a2d186878c68bdaa3ef07463a6096bd#rd\">Article-2<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>] [<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/github.com\/tts-tutorial\/survey\">Github<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>]<\/li>\n<li>Sang-gil Lee, Heeseung Kim, Chaehun Shin, Xu Tan, Chang Liu, Qi Meng, Tao Qin, Wei Chen, Sungroh Yoon, Tie-Yan Liu,\u00a0<em>PriorGrad: Improving Conditional Denoising Diffusion Models with Data-Driven Adaptive Prior<\/em>, arXiv 2021. 
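To make the pipeline mentioned in the introduction concrete, here is a minimal sketch of how the three classic TTS stages (frontend, acoustic model, vocoder) compose. The function bodies are toy stand-ins invented for illustration, not Azure TTS internals; in a real system each stage is a trained neural network (e.g., a FastSpeech-style acoustic model and a neural vocoder).

```python
import numpy as np

def frontend(text: str) -> list[str]:
    # Text analysis: normalization plus grapheme-to-phoneme conversion.
    # Here we simply keep letters and spaces as stand-in "phonemes".
    return [c for c in text.lower() if c.isalpha() or c == " "]

def acoustic_model(phonemes: list[str], n_mels: int = 80) -> np.ndarray:
    # Maps phonemes to a mel-spectrogram. A trained model would predict
    # durations and spectral frames; we emit 5 random frames per phoneme.
    frames_per_phoneme = 5
    rng = np.random.default_rng(0)
    return rng.standard_normal((len(phonemes) * frames_per_phoneme, n_mels))

def vocoder(mel: np.ndarray, hop_length: int = 256) -> np.ndarray:
    # Converts mel frames to a waveform. A neural vocoder would synthesize
    # audio conditioned on mel; we just return noise of the right length.
    rng = np.random.default_rng(1)
    return rng.standard_normal(mel.shape[0] * hop_length).astype(np.float32)

wave = vocoder(acoustic_model(frontend("Hello world")))
print(wave.shape)  # (14080,): 11 units x 5 frames x 256 samples per frame
```

End-to-end text-to-wave models, also mentioned above, collapse the acoustic model and vocoder into a single network that maps text directly to a waveform.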
**Paper Publication** (Speech demo page: https://speechresearch.github.io/)

- Jiawei Chen, Xu Tan, Yichong Leng, Jin Xu, Guihua Wen, Tao Qin, Tie-Yan Liu, *Speech-T: Transducer for Text to Speech and Beyond*, **NeurIPS** 2021.
- Xu Tan, Tao Qin, Frank Soong, Tie-Yan Liu, *A Survey on Neural Speech Synthesis*, arXiv 2021. [[Paper](https://arxiv.org/pdf/2106.15561.pdf)] [[Article-1](https://blog.csdn.net/weixin_42721167/article/details/118684294)] [[Article-2](https://mp.weixin.qq.com/s?__biz=MzU2OTA0NzE2NA==&mid=2247561935&idx=1&sn=6452e1be58b5d9d05f5794e54c05aaee&chksm=fc8713dccbf09aca60d242b45a07ffea57209c9b9a36edac487d478d127ef24abd141dcce01d&mpshare=1&scene=1&srcid=0725TAu10EeXIl3vyv6ad8Wg&sharer_sharetime=1627186618826&sharer_shareid=6a2d186878c68bdaa3ef07463a6096bd#rd)] [[GitHub](https://github.com/tts-tutorial/survey)]
- Sang-gil Lee, Heeseung Kim, Chaehun Shin, Xu Tan, Chang Liu, Qi Meng, Tao Qin, Wei Chen, Sungroh Yoon, Tie-Yan Liu, *PriorGrad: Improving Conditional Denoising Diffusion Models with Data-Driven Adaptive Prior*, arXiv 2021. [[Paper](https://arxiv.org/pdf/2106.06406.pdf)]
- Yuzi Yan, Xu Tan, Bohan Li, Guangyan Zhang, Tao Qin, Sheng Zhao, Yuan Shen, Wei-Qiang Zhang, Tie-Yan Liu, *AdaSpeech 3: Adaptive Text to Speech for Spontaneous Style*, **INTERSPEECH** 2021.
- Yuzi Yan, Xu Tan, Bohan Li, Tao Qin, Sheng Zhao, Yuan Shen, Tie-Yan Liu, *AdaSpeech 2: Adaptive Text to Speech with Untranscribed Data*, **ICASSP** 2021. [[Paper](https://arxiv.org/pdf/2104.09715.pdf)]
- Renqian Luo, Xu Tan, Rui Wang, Tao Qin, Jinzhu Li, Sheng Zhao, Enhong Chen, Tie-Yan Liu, *LightSpeech: Lightweight and Fast Text to Speech with Neural Architecture Search*, **ICASSP** 2021. [[Paper](https://arxiv.org/pdf/2102.04040.pdf)]
- Chen Zhang, Yi Ren, Xu Tan, Jinglin Liu, Kejun Zhang, Tao Qin, Sheng Zhao, Tie-Yan Liu, *DenoiSpeech: Denoising Text to Speech with Frame-Level Noise Modeling*, **ICASSP** 2021. [[Paper](https://arxiv.org/pdf/2012.09547.pdf)]
- Yichong Leng, Xu Tan, Sheng Zhao, Frank Soong, Xiang-Yang Li, Tao Qin, *MBNet: MOS Prediction for Synthesized Speech with Mean-Bias Network*, **ICASSP** 2021. [[Paper](https://arxiv.org/pdf/2103.00110.pdf)]
- Mingjian Chen, Xu Tan, Bohan Li, Yanqing Liu, Tao Qin, Sheng Zhao, Tie-Yan Liu, *AdaSpeech: Adaptive Text to Speech for Custom Voice*, **ICLR** 2021. [[Paper](https://arxiv.org/pdf/2103.00993.pdf)]
- Yi Ren, Chenxu Hu, Xu Tan, Tao Qin, Sheng Zhao, Zhou Zhao, Tie-Yan Liu, *FastSpeech 2: Fast and High-Quality End-to-End Text to Speech*, **ICLR** 2021. [[Paper](https://arxiv.org/pdf/2006.04558.pdf)] [[Blog](https://mp.weixin.qq.com/s?__biz=MzAwMTA3MzM4Nw==&mid=2649453247&idx=1&sn=8e9ea126795737af923630c725c6ac72&chksm=82c09b3bb5b7122d1f0ddb7bca9dfb217c28c9b303ecbd66442f9d145ab69b9ad2d7ad65f345&scene=21#wechat_redirect)]
- Jiawei Chen, Xu Tan, Jian Luan, Tao Qin, Tie-Yan Liu, *HiFiSinger: Towards High-Fidelity Neural Singing Voice Synthesis*, arXiv 2020. [[Paper](https://arxiv.org/pdf/2009.01776.pdf)]
- Peiling Lu, Jie Wu, Jian Luan, Xu Tan, Li Zhou, *XiaoiceSing: A High-Quality and Integrated Singing Voice Synthesis System*, **INTERSPEECH** 2020. [[Paper](https://arxiv.org/pdf/2006.06261.pdf)]
- Mingjian Chen, Xu Tan, Yi Ren, Jin Xu, Hao Sun, Sheng Zhao, Tao Qin, Tie-Yan Liu, *MultiSpeech: Multi-Speaker Text to Speech with Transformer*, **INTERSPEECH** 2020. [[Paper](https://arxiv.org/pdf/2006.04664.pdf)]
- Yi Ren, Xu Tan, Tao Qin, Jian Luan, Zhou Zhao, Tie-Yan Liu, *DeepSinger: Singing Voice Synthesis with Data Mined From the Web*, **KDD** 2020. [[Paper](https://arxiv.org/pdf/2007.04590.pdf)]
- Jin Xu, Xu Tan, Yi Ren, Tao Qin, Jian Li, Sheng Zhao, Tie-Yan Liu, *LRSpeech: Extremely Low-Resource Speech Synthesis and Recognition*, **KDD** 2020. [[Paper](https://arxiv.org/pdf/2008.03687.pdf)] [[Blog](https://mp.weixin.qq.com/s?__biz=MzAwMTA3MzM4Nw==&mid=2649455526&idx=1&sn=647f223b1b8061639e87614313648d23&scene=19#wechat_redirect)]
- Tomoki Hayashi, Ryuichi Yamamoto, Katsuki Inoue, Takenori Yoshimura, Shinji Watanabe, Tomoki Toda, Kazuya Takeda, Yu Zhang, Xu Tan, *ESPnet-TTS: Unified, Reproducible, and Integratable Open Source End-to-End Text-to-Speech Toolkit*, **ICASSP** 2020. [[Paper](https://arxiv.org/pdf/1910.10909.pdf)]
- Yi Ren, Yangjun Ruan, Xu Tan, Tao Qin, Sheng Zhao, Zhou Zhao, Tie-Yan Liu, *FastSpeech: Fast, Robust and Controllable Text to Speech* (a sketch of its length regulator follows this list), **NeurIPS** 2019. [[Paper](https://arxiv.org/pdf/1905.09263.pdf)] [[Demo](https://speechresearch.github.io/fastspeech/)] [[Article](https://mp.weixin.qq.com/s/aHupAjPNFdUdaG9Uof_obQ)] [[Reddit](https://www.reddit.com/r/MachineLearning/comments/brzwi5/r_fastspeech_fast_robust_and_controllable_text_to/)]
- Hao Sun, Xu Tan, Jun-Wei Gan, Sheng Zhao, Dongxu Han, Hongzhi Liu, Tao Qin, Tie-Yan Liu, *Knowledge Distillation from BERT in Pre-training and Fine-tuning for Polyphone Disambiguation*, **ASRU** 2019. [[Paper](https://ieeexplore.ieee.org/abstract/document/9003918)]
- Hao Sun, Xu Tan, Jun-Wei Gan, Hongzhi Liu, Sheng Zhao, Tao Qin, Tie-Yan Liu, *Token-Level Ensemble Distillation for Grapheme-to-Phoneme Conversion*, **INTERSPEECH** 2019. [[Paper](https://arxiv.org/pdf/1904.03446.pdf)]
- Yi Ren, Xu Tan, Tao Qin, Zhou Zhao, Sheng Zhao, Tie-Yan Liu, *Almost Unsupervised Text to Speech and Automatic Speech Recognition*, **ICML** 2019. [[Paper](https://arxiv.org/pdf/1905.06791.pdf)] [[Demo](https://speechresearch.github.io/unsuper/)] [[Article](https://mp.weixin.qq.com/s/8fMHr0sReeA8pPZ4WAdkFw)] [[Blog](https://news.developer.nvidia.com/microsoft-enhances-sra-tts-algorithms/)] [[Slides](https://icml.cc/media/Slides/icml/2019/201(13-09-00)-13-10-05-4923-almost_unsuperv.pdf)] [[Video](https://www.youtube.com/watch?v=UXpHzPrDJ2w)]
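As referenced in the FastSpeech entry above, the core of its parallel mel-spectrogram generation is a length regulator: each phoneme's hidden states are repeated according to a predicted duration so the expanded sequence matches the target spectrogram length. Below is a minimal sketch with toy tensors; the real model learns durations with a trained duration predictor, and `alpha` scales them to control voice speed.

```python
import numpy as np

def length_regulate(hidden: np.ndarray, durations: np.ndarray,
                    alpha: float = 1.0) -> np.ndarray:
    """Expand (num_phonemes, dim) hidden states to (num_frames, dim).

    alpha < 1 speeds speech up and alpha > 1 slows it down, by scaling
    the predicted per-phoneme durations before expansion.
    """
    frames = np.maximum(1, np.round(durations * alpha)).astype(int)
    return np.repeat(hidden, frames, axis=0)  # repeat each row by its count

hidden = np.arange(8, dtype=np.float32).reshape(4, 2)  # 4 phonemes, dim 2
durations = np.array([2, 1, 3, 2])                     # predicted frame counts
print(length_regulate(hidden, durations).shape)             # (8, 2)
print(length_regulate(hidden, durations, alpha=0.5).shape)  # (5, 2), faster speech
```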