{"id":997734,"date":"2024-01-10T08:30:00","date_gmt":"2024-01-10T16:30:00","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/blog\/research-focus-week-of-january-8-2024\/"},"modified":"2024-09-16T08:31:57","modified_gmt":"2024-09-16T15:31:57","slug":"research-focus-week-of-january-8-2024","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-us\/research\/blog\/research-focus-week-of-january-8-2024\/","title":{"rendered":"Research Focus: Week of January 8, 2024"},"content":{"rendered":"\n<figure class=\"wp-block-pullquote\"><blockquote><p><em class=\"\">Welcome to Research Focus, a series of blog posts that highlights notable publications, events, code\/datasets, new hires and other milestones from across the research community at Microsoft.<\/em><\/p><\/blockquote><\/figure>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1400\" height=\"788\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/01\/RF32-BlogHeroFeature-1400x788-1.png\" alt=\"Research Focus - week of January 8, 2024\" class=\"wp-image-997752\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/01\/RF32-BlogHeroFeature-1400x788-1.png 1400w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/01\/RF32-BlogHeroFeature-1400x788-1-300x169.png 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/01\/RF32-BlogHeroFeature-1400x788-1-1024x576.png 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/01\/RF32-BlogHeroFeature-1400x788-1-768x432.png 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/01\/RF32-BlogHeroFeature-1400x788-1-1066x600.png 1066w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/01\/RF32-BlogHeroFeature-1400x788-1-655x368.png 655w, 
https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/01\/RF32-BlogHeroFeature-1400x788-1-343x193.png 343w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/01\/RF32-BlogHeroFeature-1400x788-1-240x135.png 240w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/01\/RF32-BlogHeroFeature-1400x788-1-640x360.png 640w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/01\/RF32-BlogHeroFeature-1400x788-1-960x540.png 960w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/01\/RF32-BlogHeroFeature-1400x788-1-1280x720.png 1280w\" sizes=\"auto, (max-width: 1400px) 100vw, 1400px\" \/><\/figure>\n\n\n\n<h3 class=\"wp-block-heading h6 has-blue-color has-text-color has-link-color wp-elements-a584a2137da4151ecbde93fba771f798\" id=\"new-research\">NEW RESEARCH<\/h3>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"mixture-of-linear-experts-for-long-term-time-series-forecasting\">Mixture-of-Linear-Experts for Long-term Time Series Forecasting&nbsp;<\/h2>\n\n\n\n<p><strong><em>This research was accepted by the 2024 International Conference on Artificial Intelligence and Statistics (AISTATS<\/em><\/strong>)<strong><em>.<\/em><\/strong><\/p>\n\n\n\n<p>Long-term time series forecasting (LTSF), which aims to predict future values of a series given past values, is an important problem in the machine learning community. It\u2019s useful in areas like weather modeling, traffic flow prediction, and financial forecasting.&nbsp;&nbsp;<\/p>\n\n\n\n<p>The current state of the art on LTSF is attained in some cases by linear-centric models. However, real-world time series are usually nonstationary. For example, traffic patterns change on different days of the week. The inherent simplicity of linear-centric models makes them unable to capture these patterns. 
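<\/p>

<p>The cost of nonstationarity for a single linear model, and the benefit of regime-specialized experts, can be illustrated with a toy example. This sketch is ours, not code from the paper: the router here is an oracle keyed to a known seasonality feature, whereas MoLE learns its router end-to-end together with the experts.<\/p>

```python
# Toy series with two interleaved regimes (think weekday vs. weekend
# traffic): regime 0 trends up, regime 1 trends down.
def make_series(n):
    xs, ys, regimes = [], [], []
    for t in range(n):
        r = t % 2                     # alternating "day type" feature
        x = float(t)
        y = 2.0 * x + 1.0 if r == 0 else -1.5 * x + 40.0
        xs.append(x)
        ys.append(y)
        regimes.append(r)
    return xs, ys, regimes

def fit_linear(xs, ys):
    """Closed-form ordinary least squares for y = w*x + b."""
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    var = sum((x - mx) ** 2 for x in xs)
    w = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / var
    return w, my - w * mx

xs, ys, regimes = make_series(40)

# One linear model fit across both regimes: it must compromise.
w, b = fit_linear(xs, ys)
err_single = sum(abs(w * x + b - y) for x, y in zip(xs, ys)) / len(xs)

# One linear expert per regime, routed by the seasonality feature.
# (MoLE trains the router jointly; this oracle router only illustrates
# why specialization helps.)
experts = {r: fit_linear([x for x, rr in zip(xs, regimes) if rr == r],
                         [y for y, rr in zip(ys, regimes) if rr == r])
           for r in (0, 1)}

def predict_mole(x, r):
    w_r, b_r = experts[r]
    return w_r * x + b_r

err_mole = sum(abs(predict_mole(x, r) - y)
               for x, y, r in zip(xs, ys, regimes)) / len(xs)
```

<p>On this toy data the per-regime experts fit exactly while the single model averages the two regimes, which mirrors the motivation described above.
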
In a recent paper: <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/mixture-of-linear-experts-for-long-term-time-series-forecasting\/\" target=\"_blank\" rel=\"noreferrer noopener\">Mixture-of-Linear-Experts for Long-term Time Series Forecasting<\/a>, researchers from Microsoft and external colleagues propose Mixture-of-Linear-Experts (MoLE) to address this problem. Instead of training a single model, MoLE trains multiple linear-centric models (i.e., experts) and a router model that weighs and mixes their outputs. While the entire framework is trained end-to-end, each expert learns to specialize in a specific temporal pattern, and the router model learns to compose the experts adaptively. Experiments show that MoLE significantly reduces forecasting error of linear-centric models, and MoLE outperforms state-of-the-art transformer-based approaches in 68% of settings.<\/p>\n\n\n\n<div class=\"wp-block-buttons is-content-justification-center is-content-justification-center is-layout-flex wp-container-core-buttons-is-layout-16018d1d wp-block-buttons-is-layout-flex\">\n<div class=\"wp-block-button is-style-outline is-style-outline--1\"><a data-bi-type=\"button\" class=\"wp-block-button__link wp-element-button\" href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/mixture-of-linear-experts-for-long-term-time-series-forecasting\/\" target=\"_blank\" rel=\"noreferrer noopener\">Read the paper<\/a><\/div>\n<\/div>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity is-style-dots\"\/>\n\n\n\n<h3 class=\"wp-block-heading h6 has-blue-color has-text-color has-link-color wp-elements-a584a2137da4151ecbde93fba771f798\" id=\"new-research\">NEW RESEARCH<\/h3>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"a-weakly-supervised-streaming-multilingual-speech-model-with-truly-zero-shot-capability\">A Weakly-Supervised Streaming Multilingual Speech Model with Truly Zero-Shot Capability<\/h2>\n\n\n\n<p>End-to-end (E2E) models are the dominant model 
structure in automatic speech recognition (ASR) and speech translation (ST). This has led to efforts to develop a unified E2E model for multilingual ASR and multilingual ST tasks. Streaming ASR and ST tasks have extensively utilized neural transducers in the past.&nbsp;&nbsp;<\/p>\n\n\n\n<p>In a recent paper: <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/a-weakly-supervised-streaming-multilingual-speech-model-with-truly-zero-shot-capability\/\" target=\"_blank\" rel=\"noreferrer noopener\">A Weakly-Supervised Streaming Multilingual Speech Model with Truly Zero-Shot Capability<\/a>, researchers from Microsoft present a streaming multilingual speech model \u2013 $SM^2$ \u2013 which employs a single neural transducer model for transcribing or translating multiple languages into target languages. $SM^2$ is trained using weakly supervised data created by converting speech recognition transcriptions with a machine translation model. Leveraging 351,000 hours of speech training data from 25 languages, $SM^2$ achieves impressive ST performance. Notably, no human-labeled ST data was employed during training. 
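<\/p>

<p>The data recipe can be sketched as follows; the toy word-map translator and the records below are hypothetical stand-ins for the text-based machine translation service and the anonymized ASR corpus, not artifacts from the paper.<\/p>

```python
# Weak-supervision recipe: pair each speech segment with a machine
# translation of its ASR transcription, yielding {source-speech,
# target-text} training pairs without any human-labeled ST data.
# TOY_MT is a stand-in for a real text-based MT service.
TOY_MT = {"hello": "hallo", "world": "welt", "good": "gut"}

def toy_translate(text):
    return " ".join(TOY_MT.get(w, w) for w in text.lower().split())

# Each ASR record: (speech-feature placeholder, transcription).
asr_data = [
    ("audio_001", "hello world"),
    ("audio_002", "good world"),
]

# Weakly supervised ST pairs: same audio, machine-translated target text.
weak_st_pairs = [(audio, toy_translate(text)) for audio, text in asr_data]
```

<p>
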
All supervision came from weakly labeled ST data, generated by converting the anonymized ASR transcriptions with a text-based machine translation service.&nbsp;<\/p>\n\n\n\n<p>The researchers also demonstrate the truly zero-shot capability of $SM^2$ when expanding to new target languages, generating high-quality zero-shot ST translation for \\{source-speech, target-text\\} pairs that were not seen during training.<\/p>\n\n\n\n<div class=\"wp-block-buttons is-content-justification-center is-content-justification-center is-layout-flex wp-container-core-buttons-is-layout-16018d1d wp-block-buttons-is-layout-flex\">\n<div class=\"wp-block-button is-style-outline is-style-outline--2\"><a data-bi-type=\"button\" class=\"wp-block-button__link wp-element-button\" href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/a-weakly-supervised-streaming-multilingual-speech-model-with-truly-zero-shot-capability\/\" target=\"_blank\" rel=\"noreferrer noopener\">Read the paper<\/a><\/div>\n<\/div>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity is-style-dots\"\/>\n\n\n\n<h3 class=\"wp-block-heading h6 has-blue-color has-text-color has-link-color wp-elements-a584a2137da4151ecbde93fba771f798\" id=\"new-research\">NEW RESEARCH<\/h3>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"kbformer-a-diffusion-model-for-structured-entity-completion\">KBFormer: A Diffusion Model for Structured Entity Completion<\/h2>\n\n\n\n<p>Deep generative models include large language models (LLMs) for text, plus models for other modalities, such as for vision and audio. 
In a recent paper: <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/kbformer-a-diffusion-model-for-structured-entity-completion\/\" target=\"_blank\" rel=\"noreferrer noopener\">KBFormer: A Diffusion Model for Structured Entity Completion<\/a>, researchers from Microsoft and external colleagues explore generative modeling of structured entities with heterogeneous properties, such as numerical, categorical, string, and composite. This includes entries in rich knowledge bases (KBs), items in product catalogs or scientific catalogs, and ontologies like the periodic table of elements and the various properties of isotopes.&nbsp;<\/p>\n\n\n\n<p>Their approach handles such heterogeneous data through a mixed continuous-discrete diffusion process over the properties, using a flexible framework that can model entities with arbitrary hierarchical properties. Using this approach, the researchers obtain state-of-the-art performance on a majority of cases across 15 datasets. In addition, experiments with a device KB and a nuclear physics dataset demonstrate the model\u2019s ability to learn representations useful for entity completion in diverse settings. 
This has many downstream use cases, including modeling numerical properties with high accuracy \u2013 critical for science applications, which also benefit from the model\u2019s inherent probabilistic nature.<\/p>\n\n\n\n<div class=\"wp-block-buttons is-content-justification-center is-content-justification-center is-layout-flex wp-container-core-buttons-is-layout-16018d1d wp-block-buttons-is-layout-flex\">\n<div class=\"wp-block-button is-style-outline is-style-outline--3\"><a data-bi-type=\"button\" class=\"wp-block-button__link wp-element-button\" href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/kbformer-a-diffusion-model-for-structured-entity-completion\/\" target=\"_blank\" rel=\"noreferrer noopener\">Read the paper<\/a><\/div>\n<\/div>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity is-style-dots\"\/>\n\n\n\n<h3 class=\"wp-block-heading h6 has-blue-color has-text-color has-link-color wp-elements-a584a2137da4151ecbde93fba771f798\" id=\"new-research\">NEW RESEARCH<\/h3>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"a-framework-for-exploring-the-consequences-of-ai-mediated-enterprise-knowledge-access-and-identifying-risks-to-workers\">A Framework for Exploring the Consequences of AI-Mediated Enterprise Knowledge Access and Identifying Risks to Workers<\/h2>\n\n\n\n<p>People are increasingly interacting with, and being affected by, the deployment of AI systems in the workplace. 
This is a pressing matter for system designers, policy-makers, and workers themselves, which researchers from Microsoft address in a recent paper: <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/a-framework-for-exploring-the-consequences-of-ai-mediated-enterprise-knowledge-access-and-identifying-risks-to-workers\/\" target=\"_blank\" rel=\"noreferrer noopener\">A Framework for Exploring the Consequences of AI-Mediated Enterprise Knowledge Access and Identifying Risks to Workers<\/a>.&nbsp;&nbsp;<\/p>\n\n\n\n<p>Organizations generate huge amounts of information that raise challenges associated with the maintenance, dissemination, and discovery of organizational knowledge. Recent developments in AI, notably large language models (LLMs), present a shift in what is possible in this domain. These advances could enable more extensive mining, knowledge synthesis, and natural language interaction in relation to knowledge.&nbsp;&nbsp;<\/p>\n\n\n\n<p>The researchers propose the Consequence-Mechanism-Risk Framework to identify risks to workers associated with deploying AI-mediated enterprise knowledge access systems. 
The goal is to support those involved in the design and\/or deployment of such systems to identify the risks they introduce, the specific system mechanisms that introduce those risks, and the actionable levers to reduce those risks.<\/p>\n\n\n\n<div class=\"wp-block-buttons is-content-justification-center is-content-justification-center is-layout-flex wp-container-core-buttons-is-layout-16018d1d wp-block-buttons-is-layout-flex\">\n<div class=\"wp-block-button is-style-outline is-style-outline--4\"><a data-bi-type=\"button\" class=\"wp-block-button__link wp-element-button\" href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/a-framework-for-exploring-the-consequences-of-ai-mediated-enterprise-knowledge-access-and-identifying-risks-to-workers\/\" target=\"_blank\" rel=\"noreferrer noopener\">Read the paper<\/a><\/div>\n<\/div>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity is-style-dots\"\/>\n\n\n\n<h3 class=\"wp-block-heading h6 has-blue-color has-text-color has-link-color wp-elements-a584a2137da4151ecbde93fba771f798\" id=\"new-research\">NEW RESEARCH<\/h3>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"large-search-model-redefining-search-stack-in-the-era-of-llms\">Large Search Model: Redefining Search Stack in the Era of LLMs<\/h2>\n\n\n\n<p>Modern search engines are built on a stack of different components, including query understanding, retrieval, multi-stage ranking, and question answering, among others. These components are often optimized and deployed independently. In a recent paper: <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/large-search-model-redefining-search-stack-in-the-era-of-llms\/\" target=\"_blank\" rel=\"noreferrer noopener\">Large Search Model: Redefining Search Stack in the Era of LLMs<\/a>, researchers from Microsoft introduce a novel conceptual framework called large search model, which redefines the conventional search stack by unifying search tasks with one large language model (LLM). 
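<\/p>

<p>That unification can be sketched as prompt templates over a single text-generation function; the function names and templates below are our own illustration, not from the paper, and the placeholder model simply echoes its prompt so the sketch runs.<\/p>

```python
# Hypothetical sketch of the "large search model" idea: every component
# of the search stack becomes a natural-language prompt to one
# text-generation model. `generate` is a stand-in for a real LLM call.
def generate(prompt):
    # Placeholder LLM: returns a canned completion so the sketch runs.
    return f"<completion for: {prompt.splitlines()[0]}>"

def query_understanding(query):
    return generate(f"Rewrite this search query to be unambiguous: {query}")

def rank(query, docs):
    doc_list = "\n".join(f"[{i}] {d}" for i, d in enumerate(docs))
    return generate(
        f"Rank these documents by relevance to '{query}':\n{doc_list}")

def answer(query, docs):
    context = "\n".join(docs)
    return generate(
        f"Answer '{query}' using only this context:\n{context}")
```

<p>Swapping a task then means editing a prompt template, not retraining or redeploying a dedicated component.
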
All tasks are formulated as autoregressive text generation problems, allowing for the customization of tasks through the use of natural language prompts. This proposed framework capitalizes on the strong language understanding and reasoning capabilities of LLMs, offering the potential to enhance search result quality while simplifying the cumbersome search stack. To substantiate the feasibility of this framework, the researchers present a series of proof-of-concept experiments and discuss the potential challenges associated with implementing this approach within real-world search systems.<\/p>\n\n\n\n<div class=\"wp-block-buttons is-content-justification-center is-content-justification-center is-layout-flex wp-container-core-buttons-is-layout-16018d1d wp-block-buttons-is-layout-flex\">\n<div class=\"wp-block-button is-style-outline is-style-outline--5\"><a data-bi-type=\"button\" class=\"wp-block-button__link wp-element-button\" href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/large-search-model-redefining-search-stack-in-the-era-of-llms\/\" target=\"_blank\" rel=\"noreferrer noopener\">Read the paper<\/a><\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Mixture-of-linear-experts for long-term time series forecasting; Weakly-supervised streaming multilingual speech model with truly zero-shot capability; KBFormer: Diffusion model for structured entity completion; Identifying risks of AI-mediated data access: <\/p>\n","protected":false},"author":42735,"featured_media":997752,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","msr-author-ordering":[{"type":"user_nicename","value":"Zinan Lin","user_id":"42327"},{"type":"user_nicename","value":"Jinyu Li","user_id":"32312"},{"type":"user_nicename","value":"Bhaskar 
Mitra","user_id":"31257"},{"type":"user_nicename","value":"Si\u00e2n Lindley","user_id":"33651"},{"type":"user_nicename","value":"Liang Wang","user_id":"41443"},{"type":"user_nicename","value":"Nan Yang","user_id":"33054"},{"type":"user_nicename","value":"Furu Wei","user_id":"31830"}],"msr_hide_image_in_river":0,"footnotes":""},"categories":[1],"tags":[],"research-area":[13556,198583,13545,13554,13555,13559],"msr-region":[],"msr-event-type":[],"msr-locale":[268875],"msr-post-option":[243984],"msr-impact-theme":[261670],"msr-promo-type":[],"msr-podcast-series":[],"class_list":["post-997734","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-research-blog","msr-research-area-artificial-intelligence","msr-research-area-ecology-environment","msr-research-area-human-language-technologies","msr-research-area-human-computer-interaction","msr-research-area-search-information-retrieval","msr-research-area-social-sciences","msr-locale-en_us","msr-post-option-blog-homepage-featured"],"msr_event_details":{"start":"","end":"","location":""},"podcast_url":"","podcast_episode":"","msr_research_lab":[199560,199561,199565,437514],"msr_impact_theme":["Resilience"],"related-publications":[],"related-downloads":[],"related-videos":[],"related-academic-programs":[],"related-groups":[144735,437022],"related-projects":[717493,580699],"related-events":[],"related-researchers":[{"type":"user_nicename","value":"Zinan Lin","user_id":42327,"display_name":"Zinan Lin","author_link":"<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/zinanlin\/?lang=fr-ca\" aria-label=\"Visitez la page de profil pour Zinan Lin\">Zinan Lin<\/a>","is_active":false,"last_first":"Lin, Zinan","people_section":0,"alias":"zinanlin"},{"type":"user_nicename","value":"Jinyu Li","user_id":32312,"display_name":"Jinyu Li","author_link":"<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jinyli\/?lang=fr-ca\" aria-label=\"Visitez la page de profil pour Jinyu 
Li\">Jinyu Li<\/a>","is_active":false,"last_first":"Li, Jinyu","people_section":0,"alias":"jinyli"},{"type":"user_nicename","value":"Si\u00e2n Lindley","user_id":33651,"display_name":"Si\u00e2n Lindley","author_link":"<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/sianl\/?lang=fr-ca\" aria-label=\"Visitez la page de profil pour Si\u00e2n Lindley\">Si\u00e2n Lindley<\/a>","is_active":false,"last_first":"Lindley, Si\u00e2n","people_section":0,"alias":"sianl"},{"type":"user_nicename","value":"Liang Wang","user_id":41443,"display_name":"Liang Wang","author_link":"<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/wangliang\/?lang=fr-ca\" aria-label=\"Visitez la page de profil pour Liang Wang\">Liang Wang<\/a>","is_active":false,"last_first":"Wang, Liang","people_section":0,"alias":"wangliang"},{"type":"user_nicename","value":"Nan Yang","user_id":33054,"display_name":"Nan Yang","author_link":"<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/nanya\/?lang=fr-ca\" aria-label=\"Visitez la page de profil pour Nan Yang\">Nan Yang<\/a>","is_active":false,"last_first":"Yang, Nan","people_section":0,"alias":"nanya"},{"type":"user_nicename","value":"Furu Wei","user_id":31830,"display_name":"Furu Wei","author_link":"<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/fuwei\/?lang=fr-ca\" aria-label=\"Visitez la page de profil pour Furu Wei\">Furu Wei<\/a>","is_active":false,"last_first":"Wei, Furu","people_section":0,"alias":"fuwei"}],"msr_type":"Post","featured_image_thumbnail":"<img width=\"960\" height=\"540\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/01\/RF32-BlogHeroFeature-1400x788-1-960x540.png\" class=\"img-object-cover\" alt=\"Research Focus - week of January 8, 2024\" decoding=\"async\" loading=\"lazy\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/01\/RF32-BlogHeroFeature-1400x788-1-960x540.png 960w, 
https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/01\/RF32-BlogHeroFeature-1400x788-1-300x169.png 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/01\/RF32-BlogHeroFeature-1400x788-1-1024x576.png 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/01\/RF32-BlogHeroFeature-1400x788-1-768x432.png 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/01\/RF32-BlogHeroFeature-1400x788-1-1066x600.png 1066w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/01\/RF32-BlogHeroFeature-1400x788-1-655x368.png 655w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/01\/RF32-BlogHeroFeature-1400x788-1-343x193.png 343w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/01\/RF32-BlogHeroFeature-1400x788-1-240x135.png 240w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/01\/RF32-BlogHeroFeature-1400x788-1-640x360.png 640w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/01\/RF32-BlogHeroFeature-1400x788-1-1280x720.png 1280w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/01\/RF32-BlogHeroFeature-1400x788-1.png 1400w\" sizes=\"auto, (max-width: 960px) 100vw, 960px\" \/>","byline":"","formattedDate":"January 10, 2024","formattedExcerpt":"Mixture-of-linear-experts for long-term time series forecasting; Weakly-supervised streaming multilingual speech model with truly zero-shot capability; KBFormer: Diffusion model for structured entity completion; Identifying risks of AI-mediated data 
access:","locale":{"slug":"en_us","name":"English","native":"","english":"English"},"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts\/997734","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/users\/42735"}],"replies":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/comments?post=997734"}],"version-history":[{"count":13,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts\/997734\/revisions"}],"predecessor-version":[{"id":1084839,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts\/997734\/revisions\/1084839"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media\/997752"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=997734"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/categories?post=997734"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/tags?post=997734"},{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=997734"},{"taxonomy":"msr-region","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-region?post=997734"},{"taxonomy":"msr-event-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event-type?post=997734"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr
-locale?post=997734"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=997734"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=997734"},{"taxonomy":"msr-promo-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-promo-type?post=997734"},{"taxonomy":"msr-podcast-series","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-podcast-series?post=997734"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}