{"id":681651,"date":"2020-08-12T09:00:21","date_gmt":"2020-08-12T16:00:21","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?p=681651"},"modified":"2022-11-01T08:58:01","modified_gmt":"2022-11-01T15:58:01","slug":"research-collection-the-unseen-history-of-audio-and-acoustics-research-at-microsoft","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-us\/research\/blog\/research-collection-the-unseen-history-of-audio-and-acoustics-research-at-microsoft\/","title":{"rendered":"Research Collection: The Unseen History of Audio and Acoustics Research at Microsoft"},"content":{"rendered":"\n<h2 id=\"audio-and-acoustics-research-at-microsoft\">Audio and Acoustics Research at Microsoft<\/h2>\n\n\n\n<p>Getting the sound right is a crucial ingredient in natural user interfaces, immersive gaming, realistic virtual and mixed reality, and ubiquitous computing. Audio also plays an important role in assistive technologies for people who are blind or have low vision, and speech recognition and processing can help support those who are deaf or hard of hearing. 
Although computers have been able to play and process high-fidelity audio for decades, many frontiers remain in the computational recognition, analysis and rendering of sound, from speech to immersive sound fields.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"576\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_anachoic-chamber_1400x788-1024x576.jpg\" alt=\"audio and acoustics: woman and man setting up a dummy in anechoic chamber\" class=\"wp-image-682323\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_anachoic-chamber_1400x788-1024x576.jpg 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_anachoic-chamber_1400x788-300x169.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_anachoic-chamber_1400x788-768x432.jpg 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_anachoic-chamber_1400x788-1066x600.jpg 1066w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_anachoic-chamber_1400x788-655x368.jpg 655w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_anachoic-chamber_1400x788-343x193.jpg 343w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_anachoic-chamber_1400x788-640x360.jpg 640w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_anachoic-chamber_1400x788-960x540.jpg 960w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_anachoic-chamber_1400x788-1280x720.jpg 1280w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_anachoic-chamber_1400x788.jpg 1400w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" 
\/><\/figure>\n\n\n\n<p>Audio has been a key research area since Microsoft Research was founded in 1991 \u2013 in its first year, researchers used audio data as well as other cues to explore automatic summarization of audiovisual presentations. Over the years, there have been steady and significant research advances in speech recognition, natural user interfaces, audio as a tool for collaboration and productivity, capturing and reproducing sound, spatial audio, acoustic simulation and audio analytics.<\/p>\n\n\n\n<p>Many of these advances have shipped in Microsoft products and services like Windows 10, Kinect, HoloLens and Teams, as well as Ford\u2019s SYNC in-car infotainment system, Polycom\u2019s videoconferencing devices, and major game titles such as Gears of War, Sea of Thieves and Borderlands 3. Still more are working their way into future products and services, and into the hands of developers.<\/p>\n\n\n\n<p><strong>Use the timelines below to explore several threads of audio and acoustics research as they evolved from theories and experiments to real-world applications.<\/strong><\/p>\n\n\n<aside id=accordion-1a271182-285c-4b92-8264-c544906845e9 class=\"msr-table-of-contents-block accordion mb-5 pb-0\" data-bi-aN=\"table-of-contents\">\n\t<button class=\"btn btn-collapse bg-gray-100 mb-0 display-flex justify-content-between\" type=\"button\" data-mount=\"collapse\" data-target=\"#accordion-collapse-1a271182-285c-4b92-8264-c544906845e9\" aria-expanded=\"true\" aria-controls=\"accordion-collapse-1a271182-285c-4b92-8264-c544906845e9\">\n\t\t<span class=\"msr-table-of-contents-block__label subtitle\">In this article<\/span>\n\t\t<span class=\"msr-table-of-contents-block__current mr-4 text-gray-600 font-weight-normal\" aria-hidden=\"true\"><\/span>\n\t<\/button>\n\t<div id=\"accordion-collapse-1a271182-285c-4b92-8264-c544906845e9\" class=\"msr-table-of-contents-block__collapse-wrapper collapse show\" 
data-parent=\"#accordion-1a271182-285c-4b92-8264-c544906845e9\">\n\t\t<div class=\"accordion-body bg-gray-100 border-top pt-4\">\n\t\t\t<ol class=\"msr-table-of-contents-block__list\">\n\t\t\t\t\t\t\t\t\t<li class=\"msr-table-of-contents-block__list-item\">\n\t\t\t\t\t\t<a href=\"#audio-and-acoustics-research-at-microsoft\" class=\"msr-table-of-contents-block__list-item-link\">Audio and Acoustics Research at Microsoft<\/a>\n\t\t\t\t\t<\/li>\n\t\t\t\t\t\t\t\t\t<li class=\"msr-table-of-contents-block__list-item\">\n\t\t\t\t\t\t<a href=\"#speech-recognition-and-natural-user-interfaces\" class=\"msr-table-of-contents-block__list-item-link\">Speech recognition and natural user interfaces<\/a>\n\t\t\t\t\t<\/li>\n\t\t\t\t\t\t\t\t\t<li class=\"msr-table-of-contents-block__list-item\">\n\t\t\t\t\t\t<a href=\"#audio-for-collaboration-and-productivity\" class=\"msr-table-of-contents-block__list-item-link\">Audio for collaboration and productivity<\/a>\n\t\t\t\t\t<\/li>\n\t\t\t\t\t\t\t\t\t<li class=\"msr-table-of-contents-block__list-item\">\n\t\t\t\t\t\t<a href=\"#capturing-and-reproducing-sound\" class=\"msr-table-of-contents-block__list-item-link\">Capturing and reproducing sound<\/a>\n\t\t\t\t\t<\/li>\n\t\t\t\t\t\t\t\t\t<li class=\"msr-table-of-contents-block__list-item\">\n\t\t\t\t\t\t<a href=\"#progress-in-microphone-arrays\" class=\"msr-table-of-contents-block__list-item-link\">Progress in microphone arrays<\/a>\n\t\t\t\t\t<\/li>\n\t\t\t\t\t\t\t\t\t<li class=\"msr-table-of-contents-block__list-item\">\n\t\t\t\t\t\t<a href=\"#spatial-audio\" class=\"msr-table-of-contents-block__list-item-link\">Spatial audio<\/a>\n\t\t\t\t\t<\/li>\n\t\t\t\t\t\t\t\t\t<li class=\"msr-table-of-contents-block__list-item\">\n\t\t\t\t\t\t<a href=\"#acoustic-simulation\" class=\"msr-table-of-contents-block__list-item-link\">Acoustic simulation<\/a>\n\t\t\t\t\t<\/li>\n\t\t\t\t\t\t\t\t\t<li class=\"msr-table-of-contents-block__list-item\">\n\t\t\t\t\t\t<a href=\"#audio-analytics\" 
class=\"msr-table-of-contents-block__list-item-link\">Audio analytics<\/a>\n\t\t\t\t\t<\/li>\n\t\t\t\t\t\t\t<\/ol>\n\t\t<\/div>\n\t<\/div>\n\t<span class=\"msr-table-of-contents-block__progress-bar\"><\/span>\n<\/aside>\n\n\n\n<hr class=\"wp-block-separator has-text-color has-green-color has-css-opacity has-green-background-color has-background is-style-dots\"\/>\n\n\n\n<h2 id=\"speech-recognition-and-natural-user-interfaces\" class=\"alignwide has-text-align-wide\">Speech recognition and natural user interfaces<\/h2>\n\n\n\t<div class=\"wp-block-msr-block-journey journey journey--date alignwide\" data-bi-aN=\"block-journey\">\n\t\t<ol class=\"journey__list\">\n\t\t\t\n\t<li class=\"wp-block-msr-block-moment moment has-date\" data-bi-aN=\"block-moment\">\n\t\t<div class=\"moment__dot moment__dot--start\" role=\"presentation\"><\/div>\n\t\t<div role=\"presentation\"><\/div>\n\t\t<div class=\"moment__details\">\n\t\t\t\t\t\t<div class=\"moment__counter\"><\/div>\n\t\t\t\t\t\t\t<div class=\"moment__date-year\">\n\t\t\t\t\t2002\t\t\t\t<\/div>\n\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t<div class=\"moment__content\">\n\t\t\t\n\n<h3 id=\"microsoft-researchers-establish-the-sound-capture-and-speech-enhancement-project\" class=\"moment__title\">Microsoft researchers establish the Sound Capture and Speech Enhancement project<\/h3>\n\n\n\n<p>The <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/project\/sound-capture-speech-enhancement\/\">Sound Capture and Speech Enhancement<\/a> project begins to explore areas such as acoustic echo reduction, microphone array processing and noise reduction.<\/p>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article 
class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/gain-self-calibration-procedure-for-microphone-arrays-2\/\" data-bi-cN=\"Gain Self-Calibration Procedure for Microphone Arrays\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Gain Self-Calibration Procedure for Microphone Arrays<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t\t\t<p class=\"annotations__caption text-neutral-400 mt-2\">This paper introduces one of the technologies that made microphone arrays feasible for manufacturing.<\/p>\n\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/a-new-beamformer-design-algorithm-for-microphone-arrays\/\" data-bi-cN=\"A New Beamformer Design Algorithm for Microphone Arrays\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>A New Beamformer Design Algorithm for Microphone Arrays<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<p><\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div 
class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/reverberation-reduction-for-better-speech-recognition\/\" data-bi-cN=\"Reverberation Reduction for Better Speech Recognition\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Reverberation Reduction for Better Speech Recognition<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/microphone-array-post-processor-using-instantaneous-direction-of-arrival\/\" data-bi-cN=\"Microphone Array Post-Processor Using Instantaneous Direction of Arrival\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Microphone Array Post-Processor Using Instantaneous Direction of Arrival<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n<\/div>\n<\/div>\n\n\t\t<\/div>\n\t\t<div class=\"moment__dot moment__dot--end\" role=\"presentation\"><\/div>\n\t<\/li>\n\t\n\n\t<li class=\"wp-block-msr-block-moment moment has-date\" 
data-bi-aN=\"block-moment\">\n\t\t<div class=\"moment__dot moment__dot--start\" role=\"presentation\"><\/div>\n\t\t<div role=\"presentation\"><\/div>\n\t\t<div class=\"moment__details\">\n\t\t\t\t\t\t<div class=\"moment__counter\"><\/div>\n\t\t\t\t\t\t\t<div class=\"moment__date-year\">\n\t\t\t\t\t2007\t\t\t\t<\/div>\n\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t<div class=\"moment__content\">\n\t\t\t\n\n<h3 id=\"ford-releases-sync\" class=\"moment__title\">Ford releases SYNC<\/h3>\n\n\n\n<p>Ford releases the first version of its SYNC in-car infotainment system, with a speech enhancement audio pipeline first designed by Microsoft researchers.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"576\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_2007-Ford-SYNC_1400x788-1024x576.jpg\" alt=\"audio and acoustics: man standing in front of Experience Ford SYNC kiosk\" class=\"wp-image-682314\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_2007-Ford-SYNC_1400x788-1024x576.jpg 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_2007-Ford-SYNC_1400x788-300x169.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_2007-Ford-SYNC_1400x788-768x432.jpg 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_2007-Ford-SYNC_1400x788-1066x600.jpg 1066w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_2007-Ford-SYNC_1400x788-655x368.jpg 655w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_2007-Ford-SYNC_1400x788-343x193.jpg 343w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_2007-Ford-SYNC_1400x788-640x360.jpg 640w, 
https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_2007-Ford-SYNC_1400x788-960x540.jpg 960w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_2007-Ford-SYNC_1400x788-1280x720.jpg 1280w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_2007-Ford-SYNC_1400x788.jpg 1400w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Video<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/natural-language-moves-in-car-infotainment-forward\/\" data-bi-cN=\"Natural Language Moves In-Car Infotainment Forward\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Natural Language Moves In-Car Infotainment Forward<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t\t\t<p class=\"annotations__caption text-neutral-400 mt-2\">February 2009<\/p>\n\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 
small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/unified-framework-for-single-channel-speech-enhancement\/\" data-bi-cN=\"Unified Framework for Single Channel Speech Enhancement\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Unified Framework for Single Channel Speech Enhancement<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t\t\t<p class=\"annotations__caption text-neutral-400 mt-2\">This paper introduced the parameter optimization approach used in Ford SYNC\u2019s speech enhancement pipeline (August 2009).<\/p>\n\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n<\/div>\n<\/div>\n\n\t\t<\/div>\n\t\t<div class=\"moment__dot moment__dot--end\" role=\"presentation\"><\/div>\n\t<\/li>\n\t\n\n\t<li class=\"wp-block-msr-block-moment moment has-date\" data-bi-aN=\"block-moment\">\n\t\t<div class=\"moment__dot moment__dot--start\" role=\"presentation\"><\/div>\n\t\t<div role=\"presentation\"><\/div>\n\t\t<div class=\"moment__details\">\n\t\t\t\t\t\t<div class=\"moment__counter\"><\/div>\n\t\t\t\t\t\t\t<div class=\"moment__date-year\">\n\t\t\t\t\t2007\t\t\t\t<\/div>\n\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t<div class=\"moment__content\">\n\t\t\t\n\n<h3 id=\"windows-support-for-microphone-arrays\" class=\"moment__title\">Windows support for microphone arrays<\/h3>\n\n\n\n<p>Microsoft releases Windows Vista, including support for four preselected microphone array geometries and standardized support for USB microphone arrays. 
Later, Windows 10 is updated to include support for microphone arrays with arbitrary geometry.<\/p>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/sound-capture-and-processing-practical-approaches\/\" data-bi-cN=\"Sound Capture and Processing: Practical Approaches\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Sound Capture and Processing: Practical Approaches<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t\t\t<p class=\"annotations__caption text-neutral-400 mt-2\">This book includes the introduction of multichannel acoustic echo cancellation, which later ships as part of Microsoft Kinect (July 2009).<\/p>\n\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<p><\/p>\n<\/div>\n<\/div>\n\n\t\t<\/div>\n\t\t<div class=\"moment__dot moment__dot--end\" role=\"presentation\"><\/div>\n\t<\/li>\n\t\n\n\t<li class=\"wp-block-msr-block-moment moment has-date\" data-bi-aN=\"block-moment\">\n\t\t<div class=\"moment__dot moment__dot--start\" role=\"presentation\"><\/div>\n\t\t<div role=\"presentation\"><\/div>\n\t\t<div class=\"moment__details\">\n\t\t\t\t\t\t<div 
class=\"moment__counter\"><\/div>\n\t\t\t\t\t\t\t<div class=\"moment__date-year\">\n\t\t\t\t\t2010\t\t\t\t<\/div>\n\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t<div class=\"moment__content\">\n\t\t\t\n\n<h3 id=\"hands-free-control-in-kinect\" class=\"moment__title\">Hands-free control in Kinect<\/h3>\n\n\n\n<p>Microsoft releases Kinect for Xbox 360, which includes the first hands-free open microphone command and control product with surround sound echo cancellation.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"576\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/KinectVoice_2306_1400x788-1024x576.jpg\" alt=\"Kinect Voice Illustration 2010\" class=\"wp-image-684177\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/KinectVoice_2306_1400x788-1024x576.jpg 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/KinectVoice_2306_1400x788-300x169.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/KinectVoice_2306_1400x788-768x432.jpg 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/KinectVoice_2306_1400x788-1066x600.jpg 1066w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/KinectVoice_2306_1400x788-655x368.jpg 655w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/KinectVoice_2306_1400x788-343x193.jpg 343w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/KinectVoice_2306_1400x788-640x360.jpg 640w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/KinectVoice_2306_1400x788-960x540.jpg 960w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/KinectVoice_2306_1400x788-1280x720.jpg 1280w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/KinectVoice_2306_1400x788.jpg 1400w\" 
sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/beamformer-design-using-measured-microphone-directivity-patterns-robustness-to-modelling-error\/\" data-bi-cN=\"Beamformer Design Using Measured Microphone Directivity Patterns: Robustness to Modelling Error\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Beamformer Design Using Measured Microphone Directivity Patterns: Robustness to Modelling Error<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/optimal-3d-beamforming-using-measured-microphone-directivity-patterns\/\" data-bi-cN=\"Optimal 3D Beamforming Using Measured Microphone Directivity Patterns\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" 
class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Optimal 3D Beamforming Using Measured Microphone Directivity Patterns<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Video<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/sound-captureapplications-in-entertainment-and-gaming\/\" data-bi-cN=\"Sound Capture Applications in Entertainment and Gaming\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Sound Capture Applications in Entertainment and Gaming<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/data-driven-suppression-rule-for-speech-enhancement\/\" data-bi-cN=\"Data Driven Suppression Rule for Speech Enhancement\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Data Driven Suppression Rule for Speech 
Enhancement<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/kinect-development-kit-toolkit-gesture-speech-based-human-machine-interaction-2\/\" data-bi-cN=\"Kinect Development Kit: A Toolkit for Gesture- and Speech-Based Human-Machine Interaction\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Kinect Development Kit: A Toolkit for Gesture- and Speech-Based Human-Machine Interaction<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n<\/div>\n<\/div>\n\n\t\t<\/div>\n\t\t<div class=\"moment__dot moment__dot--end\" role=\"presentation\"><\/div>\n\t<\/li>\n\t\n\n\t<li class=\"wp-block-msr-block-moment moment has-date\" data-bi-aN=\"block-moment\">\n\t\t<div class=\"moment__dot moment__dot--start\" role=\"presentation\"><\/div>\n\t\t<div role=\"presentation\"><\/div>\n\t\t<div class=\"moment__details\">\n\t\t\t\t\t\t<div class=\"moment__counter\"><\/div>\n\t\t\t\t\t\t\t<div class=\"moment__date-year\">\n\t\t\t\t\t2016\t\t\t\t<\/div>\n\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t<div class=\"moment__content\">\n\t\t\t\n\n<h3 id=\"microsoft-releases-hololens\" class=\"moment__title\">Microsoft releases HoloLens<\/h3>\n\n\n\n<p>Microsoft releases HoloLens, which contains a four-element microphone array and a sophisticated sound capture and speech enhancement system for capturing 
the voice of the wearer and the ambient sound environment.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"576\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Hololens_1400x788-1024x576.jpg\" alt=\"image of hands holding a HoloLens device\" class=\"wp-image-684174\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Hololens_1400x788-1024x576.jpg 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Hololens_1400x788-300x169.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Hololens_1400x788-768x432.jpg 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Hololens_1400x788-1066x600.jpg 1066w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Hololens_1400x788-655x368.jpg 655w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Hololens_1400x788-343x193.jpg 343w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Hololens_1400x788-640x360.jpg 640w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Hololens_1400x788-960x540.jpg 960w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Hololens_1400x788-1280x720.jpg 1280w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Hololens_1400x788.jpg 1400w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\t\t<\/div>\n\t\t<div class=\"moment__dot moment__dot--end\" role=\"presentation\"><\/div>\n\t<\/li>\n\t\n\n\t<li class=\"wp-block-msr-block-moment moment has-date\" data-bi-aN=\"block-moment\">\n\t\t<div class=\"moment__dot moment__dot--start\" role=\"presentation\"><\/div>\n\t\t<div role=\"presentation\"><\/div>\n\t\t<div 
class=\"moment__details\">\n\t\t\t\t\t\t<div class=\"moment__counter\"><\/div>\n\t\t\t\t\t\t\t<div class=\"moment__date-year\">\n\t\t\t\t\t2017\t\t\t\t<\/div>\n\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t<div class=\"moment__content\">\n\t\t\t\n\n<h3 id=\"researchers-begin-exploring-neural-networks-for-speech-enhancement\" class=\"moment__title\">Researchers begin exploring neural networks for speech enhancement<\/h3>\n\n\n\n<p>In 2017, Microsoft researchers establish the <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/project\/nn-speech-enhancement\/\">Neural Networks-Based Speech Enhancement<\/a> project, which aims for more accurate and reliable speech processing, particularly on mobile, wearable, smart home and IoT devices \u2013 which, unlike previous devices, present new challenges such as noisier background environments, greater speaker-microphone distances, and limited edge processing abilities.<\/p>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/causal-speech-enhancement-approach-combining-data-driven-learning-suppression-rule-estimation\/\" data-bi-cN=\"A Causal Speech Enhancement Approach Combining Data-driven Learning and Suppression Rule Estimation\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>A Causal Speech Enhancement 
Approach Combining Data-driven Learning and Suppression Rule Estimation<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/a-hybrid-approach-to-combining-conventional-and-deep-learning-techniques-for-single-channel-speech-enhancement-and-recognition\/\" data-bi-cN=\"A Hybrid Approach to Combining Conventional and Deep Learning Techniques for Single-channel Speech Enhancement and Recognition\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>A Hybrid Approach to Combining Conventional and Deep Learning Techniques for Single-channel Speech Enhancement and Recognition<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/convolutional-recurrent-neural-networks-for-speech-enhancement\/\" data-bi-cN=\"Convolutional-Recurrent Neural Networks for Speech Enhancement\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold 
text-decoration-none\"><span>Convolutional-Recurrent Neural Networks for Speech Enhancement<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/constrained-convolutional-recurrent-networks-for-improve-speech-quality-with-low-impact-on-recognition-accuracy\/\" data-bi-cN=\"Constrained Convolutional-recurrent Networks to Improve Speech Quality with Low Impact on Recognition Accuracy\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Constrained Convolutional-recurrent Networks to Improve Speech Quality with Low Impact on Recognition Accuracy<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/limiting-numerical-precision-of-neural-networks-to-achieve-real-time-voice-activity-detection\/\" data-bi-cN=\"Limiting Numerical Precision of Neural Networks to Achieve Real-time Voice Activity Detection\" 
data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Limiting Numerical Precision of Neural Networks to Achieve Real-time Voice Activity Detection<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n<\/div>\n<\/div>\n\n\t\t<\/div>\n\t\t<div class=\"moment__dot moment__dot--end\" role=\"presentation\"><\/div>\n\t<\/li>\n\t\n\n\t<li class=\"wp-block-msr-block-moment moment has-date\" data-bi-aN=\"block-moment\">\n\t\t<div class=\"moment__dot moment__dot--start\" role=\"presentation\"><\/div>\n\t\t<div role=\"presentation\"><\/div>\n\t\t<div class=\"moment__details\">\n\t\t\t\t\t\t<div class=\"moment__counter\"><\/div>\n\t\t\t\t\t\t\t<div class=\"moment__date-year\">\n\t\t\t\t\t2019\t\t\t\t<\/div>\n\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t<div class=\"moment__content\">\n\t\t\t\n\n<h3 id=\"microsoft-releases-hololens-2\" class=\"moment__title\">Microsoft releases HoloLens 2<\/h3>\n\n\n\n<p>The device contains a five-element microphone array and sophisticated sound capture and speech enhancement system for capturing the voice of the wearer as well as the ambient sound environment. 
Researchers explored key components of its speech enhancement technology earlier in the year.<\/p>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/weighted-speech-distortion-losses-for-neural-network-based-real-time-speech-enhancement\/\" data-bi-cN=\"Weighted Speech Distortion Losses for Neural-Network-Based Real-Time Speech Enhancement\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Weighted Speech Distortion Losses for Neural-Network-Based Real-Time Speech Enhancement<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/acoustic-localization-using-spatial-probability-in-noisy-and-reverberant-environments\/\" data-bi-cN=\"Acoustic Localization using Spatial Probability 
in Noisy and Reverberant Environments\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Acoustic Localization using Spatial Probability in Noisy and Reverberant Environments<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n<\/div>\n<\/div>\n\n\t\t<\/div>\n\t\t<div class=\"moment__dot moment__dot--end\" role=\"presentation\"><\/div>\n\t<\/li>\n\t\n\n\t<li class=\"wp-block-msr-block-moment moment has-date\" data-bi-aN=\"block-moment\">\n\t\t<div class=\"moment__dot moment__dot--start\" role=\"presentation\"><\/div>\n\t\t<div role=\"presentation\"><\/div>\n\t\t<div class=\"moment__details\">\n\t\t\t\t\t\t<div class=\"moment__counter\"><\/div>\n\t\t\t\t\t\t\t<div class=\"moment__date-year\">\n\t\t\t\t\t2020\t\t\t\t<\/div>\n\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t<div class=\"moment__content\">\n\t\t\t\n\n<h3 id=\"speech-enhancement-incorporated-into-microsoft-teams\" class=\"moment__title\">Speech enhancement incorporated into Microsoft Teams<\/h3>\n\n\n\n<p>Microsoft CEO Satya Nadella <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/posts\/satyanadella_tools-like-microsoft-teams-and-microsoft-activity-6646788172292456448-d0DD\/\" target=\"_blank\" aria-label=\"undefined (opens in a new tab)\" rel=\"noopener noreferrer\">announces<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> that new improvements to Microsoft Teams will include a neural network-based speech enhancement algorithm.<\/p>\n\n\t\t<\/div>\n\t\t<div class=\"moment__dot moment__dot--end\" role=\"presentation\"><\/div>\n\t<\/li>\n\t\n\t\t<\/ol>\n\t<\/div>\n\t\n\n\n\t<div class=\"border-bottom border-top border-gray-300 mt-5 mb-5 msr-promo text-center text-md-left alignwide\" data-bi-aN=\"promo\" 
data-bi-id=\"999693\">\n\t\t\n\n\t\t<p class=\"msr-promo__label text-gray-800 text-center text-uppercase\">\n\t\t<span class=\"px-4 bg-white display-inline-block font-weight-semibold small\">Spotlight: Event Series<\/span>\n\t<\/p>\n\t\n\t<div class=\"row pt-3 pb-4 align-items-center\">\n\t\t\t\t\t\t<div class=\"msr-promo__media col-12 col-md-5\">\n\t\t\t\t<a class=\"bg-gray-300 display-block\" href=\"https:\/\/www.microsoft.com\/en-us\/research\/event\/microsoft-research-forum\/?OCID=msr_researchforum_MCR_Blog_Promo\" aria-label=\"Microsoft Research Forum\" data-bi-cN=\"Microsoft Research Forum\" target=\"_blank\">\n\t\t\t\t\t<img decoding=\"async\" class=\"w-100 display-block\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/05\/Research-Forum-hero_1400x788.jpg\" alt=\"Research Forum | abstract background with colorful hexagons\" \/>\n\t\t\t\t<\/a>\n\t\t\t<\/div>\n\t\t\t\n\t\t\t<div class=\"msr-promo__content p-3 px-5 col-12 col-md\">\n\n\t\t\t\t\t\t\t\t\t<h2 class=\"h4\">Microsoft Research Forum<\/h2>\n\t\t\t\t\n\t\t\t\t\t\t\t\t<p id=\"microsoft-research-forum\" class=\"large\">Join us for a continuous exchange of ideas about research in the era of general AI. 
Watch the first four episodes on demand.<\/p>\n\t\t\t\t\n\t\t\t\t\t\t\t\t<div class=\"wp-block-buttons justify-content-center justify-content-md-start\">\n\t\t\t\t\t<div class=\"wp-block-button\">\n\t\t\t\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/event\/microsoft-research-forum\/?OCID=msr_researchforum_MCR_Blog_Promo\" aria-describedby=\"microsoft-research-forum\" class=\"btn btn-brand glyph-append glyph-append-chevron-right\" data-bi-cN=\"Microsoft Research Forum\" target=\"_blank\">\n\t\t\t\t\t\t\tWatch on-demand\t\t\t\t\t\t<\/a>\n\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t\t\t<\/div><!--\/.msr-promo__content-->\n\t<\/div><!--\/.msr-promo__inner-wrap-->\n\t<\/div><!--\/.msr-promo-->\n\t\n\n\n<hr class=\"wp-block-separator has-text-color has-teal-color has-css-opacity has-teal-background-color has-background is-style-dots\"\/>\n\n\n\n<h2 id=\"audio-for-collaboration-and-productivity\" class=\"alignwide has-text-align-wide\">Audio for collaboration and productivity<\/h2>\n\n\n\t<div class=\"wp-block-msr-block-journey journey journey--date alignwide\" data-bi-aN=\"block-journey\">\n\t\t<ol class=\"journey__list\">\n\t\t\t\n\t<li class=\"wp-block-msr-block-moment moment has-date\" data-bi-aN=\"block-moment\">\n\t\t<div class=\"moment__dot moment__dot--start\" role=\"presentation\"><\/div>\n\t\t<div role=\"presentation\"><\/div>\n\t\t<div class=\"moment__details\">\n\t\t\t\t\t\t<div class=\"moment__counter\"><\/div>\n\t\t\t\t\t\t\t<div class=\"moment__date-year\">\n\t\t\t\t\t1991\t\t\t\t<\/div>\n\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t<div class=\"moment__content\">\n\t\t\t\n\n<h3 id=\"first-audio-related-paper-published\" class=\"moment__title\">First audio-related paper published<\/h3>\n\n\n\n<p>Microsoft researchers publish their first audio-related <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/auto-summarization-of-audio-video-presentations\/\">paper<\/a>, on the automatic summarization of multimedia 
presentations.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"555\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_1991-software-testing-UI-screenshot-1024x555.png\" alt=\"audio and acoustics: 1991 software testing window UI\" class=\"wp-image-682302\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_1991-software-testing-UI-screenshot-1024x555.png 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_1991-software-testing-UI-screenshot-300x162.png 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_1991-software-testing-UI-screenshot-768x416.png 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_1991-software-testing-UI-screenshot-1536x832.png 1536w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_1991-software-testing-UI-screenshot.png 1590w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><figcaption>The slides, shown on right, are synchronized with summary-segment transitions derived from the presentation at left.<\/figcaption><\/figure>\n\n\n\n<p><\/p>\n\n\t\t<\/div>\n\t\t<div class=\"moment__dot moment__dot--end\" role=\"presentation\"><\/div>\n\t<\/li>\n\t\n\n\t<li class=\"wp-block-msr-block-moment moment has-date\" data-bi-aN=\"block-moment\">\n\t\t<div class=\"moment__dot moment__dot--start\" role=\"presentation\"><\/div>\n\t\t<div role=\"presentation\"><\/div>\n\t\t<div class=\"moment__details\">\n\t\t\t\t\t\t<div class=\"moment__counter\"><\/div>\n\t\t\t\t\t\t\t<div class=\"moment__date-year\">\n\t\t\t\t\t1996\t\t\t\t<\/div>\n\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t<div class=\"moment__content\">\n\t\t\t\n\n<h3 id=\"seeing-the-sound\" class=\"moment__title\">Seeing the sound<\/h3>\n\n\n\n<figure class=\"wp-block-image alignright 
size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"786\" height=\"575\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_1996-vision-steered-audio-VR.png\" alt=\"audio and acoustics: man standing in front on a large VR screen with hands up\" class=\"wp-image-682305\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_1996-vision-steered-audio-VR.png 786w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_1996-vision-steered-audio-VR-300x219.png 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_1996-vision-steered-audio-VR-768x562.png 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_1996-vision-steered-audio-VR-80x60.png 80w\" sizes=\"auto, (max-width: 786px) 100vw, 786px\" \/><\/figure>\n\n\n\n<p>In 1996, Microsoft researchers explore ways to use vision data to capture and render sound in interactive environments.<\/p>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/vision-steered-audio-interactive-environments\/\" data-bi-cN=\"Vision-Steered Audio for Interactive Environments\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Vision-Steered Audio for Interactive Environments<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" 
aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<div style=\"height:124px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p><\/p>\n\n\t\t<\/div>\n\t\t<div class=\"moment__dot moment__dot--end\" role=\"presentation\"><\/div>\n\t<\/li>\n\t\n\n\t<li class=\"wp-block-msr-block-moment moment has-date\" data-bi-aN=\"block-moment\">\n\t\t<div class=\"moment__dot moment__dot--start\" role=\"presentation\"><\/div>\n\t\t<div role=\"presentation\"><\/div>\n\t\t<div class=\"moment__details\">\n\t\t\t\t\t\t<div class=\"moment__counter\"><\/div>\n\t\t\t\t\t\t\t<div class=\"moment__date-year\">\n\t\t\t\t\t1999\t\t\t\t<\/div>\n\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t<div class=\"moment__content\">\n\t\t\t\n\n<h3 id=\"progress-in-audio-detection-and-classification\" class=\"moment__title\">Progress in audio detection and classification<\/h3>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/detection-of-target-speakers-in-audio-databases-2\/\" data-bi-cN=\"Detection of target speakers in audio databases\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Detection of target speakers in audio databases<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" 
aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t\t\t<p class=\"annotations__caption text-neutral-400 mt-2\">This paper introduces technology to detect individual speakers in audio, which is later implemented in Microsoft RoundTable.<\/p>\n\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/a-robust-audio-classification-and-segmentation-method\/\" data-bi-cN=\"A Robust Audio Classification and Segmentation Method\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>A Robust Audio Classification and Segmentation Method<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t\t\t<p class=\"annotations__caption text-neutral-400 mt-2\">This paper introduces robust audio classification and segmentation \u2013 which is used to distinguish speech, music, environmental noise, and silence.<\/p>\n\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<p><\/p>\n<\/div>\n<\/div>\n\n\t\t<\/div>\n\t\t<div class=\"moment__dot moment__dot--end\" role=\"presentation\"><\/div>\n\t<\/li>\n\t\n\n\t<li class=\"wp-block-msr-block-moment moment has-date\" data-bi-aN=\"block-moment\">\n\t\t<div class=\"moment__dot moment__dot--start\" role=\"presentation\"><\/div>\n\t\t<div role=\"presentation\"><\/div>\n\t\t<div class=\"moment__details\">\n\t\t\t\t\t\t<div class=\"moment__counter\"><\/div>\n\t\t\t\t\t\t\t<div 
class=\"moment__date-year\">\n\t\t\t\t\t2001\t\t\t\t<\/div>\n\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t<div class=\"moment__content\">\n\t\t\t\n\n<h3 id=\"project-ringcam-established\" class=\"moment__title\">Project RingCam established<\/h3>\n\n\n\n<p>Microsoft researchers establish Project RingCam, to explore 360-degree videoconferencing.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"576\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_2001-RingCam-video-conference-1024x576.jpg\" alt=\"audio and acoustics: project RingCam video conference screen\" class=\"wp-image-682311\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_2001-RingCam-video-conference-1024x576.jpg 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_2001-RingCam-video-conference-300x169.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_2001-RingCam-video-conference-768x432.jpg 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_2001-RingCam-video-conference-1066x600.jpg 1066w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_2001-RingCam-video-conference-655x368.jpg 655w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_2001-RingCam-video-conference-343x193.jpg 343w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_2001-RingCam-video-conference-640x360.jpg 640w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_2001-RingCam-video-conference-960x540.jpg 960w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_2001-RingCam-video-conference.jpg 1280w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<div 
class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/02\/Cutler_DMeetings_ACMMM_02.pdf\" data-bi-cN=\"Distributed Meetings: A Meeting Capture and Broadcasting System\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Distributed Meetings: A Meeting Capture and Broadcasting System<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\"><\/div>\n<\/div>\n\n\t\t<\/div>\n\t\t<div class=\"moment__dot moment__dot--end\" role=\"presentation\"><\/div>\n\t<\/li>\n\t\n\n\t<li class=\"wp-block-msr-block-moment moment has-date\" data-bi-aN=\"block-moment\">\n\t\t<div class=\"moment__dot moment__dot--start\" role=\"presentation\"><\/div>\n\t\t<div role=\"presentation\"><\/div>\n\t\t<div class=\"moment__details\">\n\t\t\t\t\t\t<div class=\"moment__counter\"><\/div>\n\t\t\t\t\t\t\t<div class=\"moment__date-year\">\n\t\t\t\t\t2007\t\t\t\t<\/div>\n\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t<div class=\"moment__content\">\n\t\t\t\n\n<h3 id=\"microsoft-roundtable-ships-with-speaker-detection-technology\" class=\"moment__title\">Microsoft RoundTable ships with speaker detection technology<\/h3>\n\n\n\n<figure class=\"wp-block-image alignleft size-medium\"><img 
loading=\"lazy\" decoding=\"async\" width=\"300\" height=\"217\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_OfficeRoundtable_32849-300x217.jpg\" alt=\"photo of the Microsoft RoundTable video conferencing device\" class=\"wp-image-684744\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_OfficeRoundtable_32849-300x217.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_OfficeRoundtable_32849-1024x741.jpg 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_OfficeRoundtable_32849-768x556.jpg 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_OfficeRoundtable_32849.jpg 1400w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/><\/figure>\n\n\n\n<p>Speaker detection technology developed by Microsoft researchers ships as part of the Microsoft RoundTable conferencing system.<\/p>\n\n\n\n<p>The technology is later sold to Polycom and released as the Polycom CX5000.<\/p>\n\n\t\t<\/div>\n\t\t<div class=\"moment__dot moment__dot--end\" role=\"presentation\"><\/div>\n\t<\/li>\n\t\n\t\t<\/ol>\n\t<\/div>\n\t\n\n\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<hr class=\"wp-block-separator has-css-opacity is-style-dots\"\/>\n\n\n\n<h2 id=\"capturing-and-reproducing-sound\" class=\"alignwide has-text-align-wide\">Capturing and reproducing sound<\/h2>\n\n\n\t<div class=\"wp-block-msr-block-journey journey journey--date alignwide\" data-bi-aN=\"block-journey\">\n\t\t<ol class=\"journey__list\">\n\t\t\t\n\t<li class=\"wp-block-msr-block-moment moment has-date\" data-bi-aN=\"block-moment\">\n\t\t<div class=\"moment__dot moment__dot--start\" role=\"presentation\"><\/div>\n\t\t<div role=\"presentation\"><\/div>\n\t\t<div class=\"moment__details\">\n\t\t\t\t\t\t<div class=\"moment__counter\"><\/div>\n\t\t\t\t\t\t\t<div 
class=\"moment__date-year\">\n\t\t\t\t\t1998\t\t\t\t<\/div>\n\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t<div class=\"moment__content\">\n\t\t\t\n\n<h3 id=\"researchers-begin-experimenting-with-microphone-arrays\" class=\"moment__title\">Researchers begin experimenting with microphone arrays<\/h3>\n\n\n\n<p>Microsoft researchers build their first microphone array, using an Erector set.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"447\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_1998_Large-Mic-Array-1024x447.jpg\" alt=\"audio and acoustics: prototype of microphone array\" class=\"wp-image-682308\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_1998_Large-Mic-Array-1024x447.jpg 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_1998_Large-Mic-Array-300x131.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_1998_Large-Mic-Array-768x335.jpg 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_1998_Large-Mic-Array-1536x671.jpg 1536w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_1998_Large-Mic-Array-2048x894.jpg 2048w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><figcaption>This is one of the first prototypes of microphone arrays designed in the Signal Processing group by Rico Malvar and Dinei Fiorencio in 1998.<\/figcaption><\/figure>\n\n\t\t<\/div>\n\t\t<div class=\"moment__dot moment__dot--end\" role=\"presentation\"><\/div>\n\t<\/li>\n\t\n\n\t<li class=\"wp-block-msr-block-moment moment has-date\" data-bi-aN=\"block-moment\">\n\t\t<div class=\"moment__dot moment__dot--start\" role=\"presentation\"><\/div>\n\t\t<div role=\"presentation\"><\/div>\n\t\t<div class=\"moment__details\">\n\t\t\t\t\t\t<div 
class=\"moment__counter\"><\/div>\n\t\t\t\t\t\t\t<div class=\"moment__date-year\">\n\t\t\t\t\t2005\t\t\t\t<\/div>\n\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t<div class=\"moment__content\">\n\t\t\t\n\n<h3 id=\"usb-microphone-array-prototypes\" class=\"moment__title\">USB microphone array prototypes<\/h3>\n\n\n\n<p>Microsoft researchers establish the <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/project\/audio-devices\/\">Audio Devices<\/a> project, and build and evaluate two USB microphone array prototypes: a four-element linear array and an eight-element circular array.<\/p>\n\n\t\t<\/div>\n\t\t<div class=\"moment__dot moment__dot--end\" role=\"presentation\"><\/div>\n\t<\/li>\n\t\n\n\t<li class=\"wp-block-msr-block-moment moment has-date\" data-bi-aN=\"block-moment\">\n\t\t<div class=\"moment__dot moment__dot--start\" role=\"presentation\"><\/div>\n\t\t<div role=\"presentation\"><\/div>\n\t\t<div class=\"moment__details\">\n\t\t\t\t\t\t<div class=\"moment__counter\"><\/div>\n\t\t\t\t\t\t\t<div class=\"moment__date-year\">\n\t\t\t\t\t2007\t\t\t\t<\/div>\n\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t<div class=\"moment__content\">\n\t\t\t\n\n<h3 id=\"an-anechoic-chamber-in-building-99\" class=\"moment__title\">An anechoic chamber in Building 99<\/h3>\n\n\n\n<p>Microsoft Research Redmond moves into its new home in Building 99. 
The building includes the company&#8217;s first anechoic chamber.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"550\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_2008-MSR-anechoic-chamber-1024x550.jpg\" alt=\"view of inside the anechoic chamber\" class=\"wp-image-682794\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_2008-MSR-anechoic-chamber-1024x550.jpg 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_2008-MSR-anechoic-chamber-300x161.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_2008-MSR-anechoic-chamber-768x413.jpg 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_2008-MSR-anechoic-chamber-710x380.jpg 710w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_2008-MSR-anechoic-chamber.jpg 1400w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p>Key publications from 2007:<\/p>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/robust-design-of-wideband-loudspeaker-arrays\/\" data-bi-cN=\"Robust Design of Wideband Loudspeaker Arrays\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold 
text-decoration-none\"><span>Robust Design of Wideband Loudspeaker Arrays<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/sound-capture-system-and-spatial-filter-for-small-devices\/\" data-bi-cN=\"Sound Capture System and Spatial Filter for Small Devices\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Sound Capture System and Spatial Filter for Small Devices<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n<\/div>\n<\/div>\n\n\t\t<\/div>\n\t\t<div class=\"moment__dot moment__dot--end\" role=\"presentation\"><\/div>\n\t<\/li>\n\t\n\n\t<li class=\"wp-block-msr-block-moment moment has-date\" data-bi-aN=\"block-moment\">\n\t\t<div class=\"moment__dot moment__dot--start\" role=\"presentation\"><\/div>\n\t\t<div role=\"presentation\"><\/div>\n\t\t<div class=\"moment__details\">\n\t\t\t\t\t\t<div class=\"moment__counter\"><\/div>\n\t\t\t\t\t\t\t<div class=\"moment__date-year\">\n\t\t\t\t\t2009\t\t\t\t<\/div>\n\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t<div class=\"moment__content\">\n\t\t\t\n\n<h3 id=\"anechoic-chamber-retrofitted-to-measure-sound-in-3d\" class=\"moment__title\">Anechoic chamber retrofitted to measure sound in 3D<\/h3>\n\n\n\n<p>The anechoic chamber in Building 99 is 
retrofitted to automatically measure 3D directivity and radiation patterns, as well as human spatial hearing. It uses a 3D scanner with sub-millimeter accuracy to measure the head and torso. Among other things, this advances research on head-related transfer functions (HRTFs), which enable more realistic-sounding spatial audio.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"556\" height=\"366\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_2008_AnechoicChamberWithRigFor3Dscanning.jpg\" alt=\"The Microsoft Research anechoic chamber set for measuring human spatial hearing.\" class=\"wp-image-682374\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_2008_AnechoicChamberWithRigFor3Dscanning.jpg 556w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_2008_AnechoicChamberWithRigFor3Dscanning-300x197.jpg 300w\" sizes=\"auto, (max-width: 556px) 100vw, 556px\" \/><figcaption>The Microsoft Research anechoic chamber set for measuring human spatial hearing.<\/figcaption><\/figure>\n\n\n\n<figure><div class=\"video-wrapper\"><iframe loading=\"lazy\" src=\"https:\/\/channel9.msdn.com\/Series\/CampusTours\/Microsoft-Campus-Tours-Microsoft-Research-Part-1-The-Anechoic-Chamber\/player\" width=\"960\" height=\"560\" allowfullscreen=\"\" frameborder=\"0\" title=\"Microsoft Campus Tours - Microsoft Research Part 1 - The Anechoic Chamber - Microsoft Channel 9 Video\"><\/iframe><\/div><figcaption>Microsoft Campus Tours \u2013 Microsoft Research Part 1 \u2013 The Anechoic Chamber<\/figcaption><\/figure>\n\n\t\t<\/div>\n\t\t<div class=\"moment__dot moment__dot--end\" role=\"presentation\"><\/div>\n\t<\/li>\n\t\n\n\t<li class=\"wp-block-msr-block-moment moment has-date\" data-bi-aN=\"block-moment\">\n\t\t<div class=\"moment__dot moment__dot--start\" role=\"presentation\"><\/div>\n\t\t<div 
role=\"presentation\"><\/div>\n\t\t<div class=\"moment__details\">\n\t\t\t\t\t\t<div class=\"moment__counter\"><\/div>\n\t\t\t\t\t\t\t<div class=\"moment__date-year\">\n\t\t\t\t\t2012\t\t\t\t<\/div>\n\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t<div class=\"moment__content\">\n\t\t\t\n\n<h3 id=\"progress-in-microphone-arrays\" class=\"moment__title\">Progress in microphone arrays<\/h3>\n\n\n\n<p>Microsoft researchers build a spherical 16-channel microphone array and a cylindrical 16-channel microphone array to study sound field decomposition using spherical and cylindrical functions. In 2016, they build a 64-channel spherical microphone array.<\/p>\n\n\t\t<\/div>\n\t\t<div class=\"moment__dot moment__dot--end\" role=\"presentation\"><\/div>\n\t<\/li>\n\t\n\n\t<li class=\"wp-block-msr-block-moment moment has-date\" data-bi-aN=\"block-moment\">\n\t\t<div class=\"moment__dot moment__dot--start\" role=\"presentation\"><\/div>\n\t\t<div role=\"presentation\"><\/div>\n\t\t<div class=\"moment__details\">\n\t\t\t\t\t\t<div class=\"moment__counter\"><\/div>\n\t\t\t\t\t\t\t<div class=\"moment__date-year\">\n\t\t\t\t\t2017\t\t\t\t<\/div>\n\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t<div class=\"moment__content\">\n\t\t\t\n\n<h3 id=\"a-new-approach-to-gesture-recognition\" class=\"moment__title\">A new approach to gesture recognition<\/h3>\n\n\n\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/ultrasound-based-gesture-recognition-2\/\">Ultrasound-based Gesture Recognition<\/a> \u2013 This paper introduces a new approach to gesture recognition using ultrasound waves, an approach that consumes significantly less power than optical systems.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"238\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_2017-ultrasound-based-gesture-recognition-figures-1024x238.png\" alt=\"audio and acoustics: figures showing ultrasound based gesture 
recognition setup\" class=\"wp-image-683289\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_2017-ultrasound-based-gesture-recognition-figures-1024x238.png 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_2017-ultrasound-based-gesture-recognition-figures-300x70.png 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_2017-ultrasound-based-gesture-recognition-figures-768x179.png 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_2017-ultrasound-based-gesture-recognition-figures.png 1400w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><figcaption>Figure 1: Left: Hardware set-up and close-up of the ultrasonic piezoelectric transducer at the center and an 8-element microphone array around it in a circular configuration.<br><br>Figure 2: Right: Block diagram of the proposed approach.<\/figcaption><\/figure>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/hardware-and-algorithms-for-ultrasonic-depth-imaging\/\" data-bi-cN=\"Hardware and Algorithms for Ultrasonic Depth Imaging\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Hardware and Algorithms for Ultrasonic Depth Imaging<\/span>&nbsp;<span class=\"glyph-in-link glyph-append 
glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/multimodal-gesture-recognition\/\" data-bi-cN=\"Multimodal Gesture Recognition\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Multimodal Gesture Recognition<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t\t\t<p class=\"annotations__caption text-neutral-400 mt-2\">This paper further demonstrates live ultrasound sensing for gesture recognition.<\/p>\n\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n<\/div>\n<\/div>\n\n\t\t<\/div>\n\t\t<div class=\"moment__dot moment__dot--end\" role=\"presentation\"><\/div>\n\t<\/li>\n\t\n\n\t<li class=\"wp-block-msr-block-moment moment has-date\" data-bi-aN=\"block-moment\">\n\t\t<div class=\"moment__dot moment__dot--start\" role=\"presentation\"><\/div>\n\t\t<div role=\"presentation\"><\/div>\n\t\t<div class=\"moment__details\">\n\t\t\t\t\t\t<div class=\"moment__counter\"><\/div>\n\t\t\t\t\t\t\t<div class=\"moment__date-year\">\n\t\t\t\t\t2018\t\t\t\t<\/div>\n\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t<div class=\"moment__content\">\n\t\t\t\n\n<h3 id=\"live-360-audio-and-video-streaming\" class=\"moment__title\">Live 360 audio and video streaming<\/h3>\n\n\n\n<figure class=\"wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div 
class=\"wp-block-embed__wrapper\">\n<iframe loading=\"lazy\" title=\"Live 360 Audio and Video Streaming\" width=\"500\" height=\"281\" src=\"https:\/\/www.youtube-nocookie.com\/embed\/hVrFALdY8Hc?feature=oembed&rel=0\" frameborder=\"0\" allowfullscreen><\/iframe>\n<\/div><\/figure>\n\n\t\t<\/div>\n\t\t<div class=\"moment__dot moment__dot--end\" role=\"presentation\"><\/div>\n\t<\/li>\n\t\n\n\t<li class=\"wp-block-msr-block-moment moment has-date\" data-bi-aN=\"block-moment\">\n\t\t<div class=\"moment__dot moment__dot--start\" role=\"presentation\"><\/div>\n\t\t<div role=\"presentation\"><\/div>\n\t\t<div class=\"moment__details\">\n\t\t\t\t\t\t<div class=\"moment__counter\"><\/div>\n\t\t\t\t\t\t\t<div class=\"moment__date-year\">\n\t\t\t\t\t2019\t\t\t\t<\/div>\n\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t<div class=\"moment__content\">\n\t\t\t\n\n<h3 id=\"project-denmark-established\" class=\"moment__title\">Project Denmark established<\/h3>\n\n\n\n<p>Microsoft researchers establish <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/project\/project-denmark\/\">Project Denmark<\/a>, which aims to achieve high-quality capture of meeting conversations using virtual microphone arrays composed of ordinary consumer devices such as mobile phones and laptops.<\/p>\n\n\t\t<\/div>\n\t\t<div class=\"moment__dot moment__dot--end\" role=\"presentation\"><\/div>\n\t<\/li>\n\t\n\t\t<\/ol>\n\t<\/div>\n\t\n\n\n<hr class=\"wp-block-separator has-text-color has-teal-color has-css-opacity has-teal-background-color has-background is-style-dots\"\/>\n\n\n\n<h2 id=\"spatial-audio\" class=\"alignwide has-text-align-wide\">Spatial audio<\/h2>\n\n\n\t<div class=\"wp-block-msr-block-journey journey journey--date alignwide\" data-bi-aN=\"block-journey\">\n\t\t<ol class=\"journey__list\">\n\t\t\t\n\t<li class=\"wp-block-msr-block-moment moment has-date\" data-bi-aN=\"block-moment\">\n\t\t<div class=\"moment__dot moment__dot--start\" role=\"presentation\"><\/div>\n\t\t<div 
role=\"presentation\"><\/div>\n\t\t<div class=\"moment__details\">\n\t\t\t\t\t\t<div class=\"moment__counter\"><\/div>\n\t\t\t\t\t\t\t<div class=\"moment__date-year\">\n\t\t\t\t\t2012\t\t\t\t<\/div>\n\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t<div class=\"moment__content\">\n\t\t\t\n\n<h3 id=\"new-directions-for-spatial-audio\" class=\"moment__title\">New directions for spatial audio<\/h3>\n\n\n\n<p>Microsoft researchers begin exploring new approaches to head-related transfer functions (HRTFs), which represent the acoustic transfer function from a sound source at a given location to the eardrums of a listener. This work opens the door to more realistic spatial audio tuned to the shape of the listener\u2019s head and torso.<\/p>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/hrtf-magnitude-modeling-using-a-non-regularized-least-squares-fit-of-spherical-harmonics-coefficients-on-incomplete-data\/\" data-bi-cN=\"HRTF Magnitude Modeling Using a Non-Regularized Least-Squares Fit of Spherical Harmonics Coefficients on Incomplete Data\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>HRTF Magnitude Modeling Using a Non-Regularized Least-Squares Fit of Spherical Harmonics Coefficients on Incomplete Data<\/span>&nbsp;<span 
class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/hrtf-magnitude-synthesis-via-sparse-representation-of-anthropometric-features\/\" data-bi-cN=\"HRTF Magnitude Synthesis via Sparse Representation of Anthropometric Features\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>HRTF Magnitude Synthesis via Sparse Representation of Anthropometric Features<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t\t\t<p class=\"annotations__caption text-neutral-400 mt-2\">The HRTF personalization used in HoloLens.<\/p>\n\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/hrtf-phase-synthesis-via-sparse-representation-of-anthropometric-features\/\" data-bi-cN=\"HRTF Phase Synthesis via Sparse Representation of Anthropometric Features\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold 
text-decoration-none\"><span>HRTF Phase Synthesis via Sparse Representation of Anthropometric Features<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Blog<\/span>\n\t\t\t<a href=\"https:\/\/www.windowscentral.com\/microsoft-3d-audio-tech-makes-virtual-sounds-sound-real\" data-bi-cN=\"Microsoft 3D audio tech makes virtual sounds sound real\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Microsoft 3D audio tech makes virtual sounds sound real<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n<\/div>\n<\/div>\n\n\n\n<figure class=\"wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"wp-block-embed__wrapper\">\n<iframe loading=\"lazy\" title=\"3-D Audio Demo\" width=\"500\" height=\"281\" src=\"https:\/\/www.youtube-nocookie.com\/embed\/wtFgLi6XKrY?feature=oembed&rel=0\" frameborder=\"0\" allowfullscreen><\/iframe>\n<\/div><\/figure>\n\n\t\t<\/div>\n\t\t<div class=\"moment__dot moment__dot--end\" role=\"presentation\"><\/div>\n\t<\/li>\n\t\n\n\t<li class=\"wp-block-msr-block-moment moment has-date\" data-bi-aN=\"block-moment\">\n\t\t<div class=\"moment__dot moment__dot--start\" role=\"presentation\"><\/div>\n\t\t<div role=\"presentation\"><\/div>\n\t\t<div class=\"moment__details\">\n\t\t\t\t\t\t<div class=\"moment__counter\"><\/div>\n\t\t\t\t\t\t\t<div 
class=\"moment__date-year\">\n\t\t\t\t\t2015\t\t\t\t<\/div>\n\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t<div class=\"moment__content\">\n\t\t\t\n\n<h3 id=\"virtual-surround-sound-in-windows-10\" class=\"moment__title\">Virtual surround sound in Windows 10<\/h3>\n\n\n\n<p>Microsoft releases Windows 10 with support for virtual surround sound, marketed as Windows Sonic. This spatial audio rendering system is later released as part of HoloLens.<\/p>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/estimation-of-multipath-propagation-delays-and-interaural-time-differences-from-3-d-head-scans\/\" data-bi-cN=\"Estimation of Multipath Propagation Delays and Interaural Time Differences from 3-D Head Scans\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Estimation of Multipath Propagation Delays and Interaural Time Differences from 3-D Head Scans<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div 
class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/applications-of-3d-spherical-transforms-to-personalization-of-head-related-transfer-functions\/\" data-bi-cN=\"Applications of 3D Spherical Transforms To Personalization Of Head-Related Transfer Functions\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Applications of 3D Spherical Transforms To Personalization Of Head-Related Transfer Functions<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n<\/div>\n<\/div>\n\n\t\t<\/div>\n\t\t<div class=\"moment__dot moment__dot--end\" role=\"presentation\"><\/div>\n\t<\/li>\n\t\n\n\t<li class=\"wp-block-msr-block-moment moment has-date\" data-bi-aN=\"block-moment\">\n\t\t<div class=\"moment__dot moment__dot--start\" role=\"presentation\"><\/div>\n\t\t<div role=\"presentation\"><\/div>\n\t\t<div class=\"moment__details\">\n\t\t\t\t\t\t<div class=\"moment__counter\"><\/div>\n\t\t\t\t\t\t\t<div class=\"moment__date-year\">\n\t\t\t\t\t2016\t\t\t\t<\/div>\n\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t<div class=\"moment__content\">\n\t\t\t\n\n<h3 id=\"personalized-audio-rendering-in-hololens\" class=\"moment__title\">Personalized audio rendering in HoloLens<\/h3>\n\n\n\n<p>Microsoft releases HoloLens. 
The device features an audio rendering system that is personalized on the fly to the wearer\u2019s spatial hearing.<\/p>\n\n\t\t<\/div>\n\t\t<div class=\"moment__dot moment__dot--end\" role=\"presentation\"><\/div>\n\t<\/li>\n\t\n\n\t<li class=\"wp-block-msr-block-moment moment has-date\" data-bi-aN=\"block-moment\">\n\t\t<div class=\"moment__dot moment__dot--start\" role=\"presentation\"><\/div>\n\t\t<div role=\"presentation\"><\/div>\n\t\t<div class=\"moment__details\">\n\t\t\t\t\t\t<div class=\"moment__counter\"><\/div>\n\t\t\t\t\t\t\t<div class=\"moment__date-year\">\n\t\t\t\t\t2016\t\t\t\t<\/div>\n\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t<div class=\"moment__content\">\n\t\t\t\n\n<h3 id=\"microsoft-releases-the-windows-mixed-reality-platform\" class=\"moment__title\">Microsoft releases the Windows Mixed Reality platform<\/h3>\n\n\n\n<p>Windows 10 includes support for virtual and mixed reality headsets manufactured by other companies. The platform contains an extended and improved version of the spatial audio engine. 
<\/p>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/head-related-transfer-function-personalization-needs-spatial-audio-mixed-virtual-reality\/\" data-bi-cN=\"Head-related transfer function personalization for the needs of spatial audio in mixed and virtual reality\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Head-related transfer function personalization for the needs of spatial audio in mixed and virtual reality<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\"><\/div>\n<\/div>\n\n\t\t<\/div>\n\t\t<div class=\"moment__dot moment__dot--end\" role=\"presentation\"><\/div>\n\t<\/li>\n\t\n\n\t<li class=\"wp-block-msr-block-moment moment has-date\" data-bi-aN=\"block-moment\">\n\t\t<div class=\"moment__dot moment__dot--start\" role=\"presentation\"><\/div>\n\t\t<div role=\"presentation\"><\/div>\n\t\t<div class=\"moment__details\">\n\t\t\t\t\t\t<div class=\"moment__counter\"><\/div>\n\t\t\t\t\t\t\t<div class=\"moment__date-year\">\n\t\t\t\t\t2017\t\t\t\t<\/div>\n\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t<div 
class=\"moment__content\">\n\t\t\t\n\n<h3 id=\"a-map-delivered-in-3d-sound\" class=\"moment__title\">A map delivered in 3D sound<\/h3>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"576\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/02\/Soundscape-MCR-home-page-hero-1066-x-600-1024x576.jpg\" alt=\"Man holding a smart phone, standing next to his guide dog\" class=\"wp-image-470751\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/02\/Soundscape-MCR-home-page-hero-1066-x-600-1024x576.jpg 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/02\/Soundscape-MCR-home-page-hero-1066-x-600-300x169.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/02\/Soundscape-MCR-home-page-hero-1066-x-600-768x432.jpg 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/02\/Soundscape-MCR-home-page-hero-1066-x-600.jpg 1066w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/02\/Soundscape-MCR-home-page-hero-1066-x-600-655x368.jpg 655w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/02\/Soundscape-MCR-home-page-hero-1066-x-600-343x193.jpg 343w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p class=\"has-text-align-left\">Microsoft releases <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/product\/soundscape\/\">Soundscape<\/a> (in collaboration with Guide Dogs UK) \u2013 an app for people who are blind or have low vision, which includes a spatial audio rendering system. 
Read about the <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/project\/soundscape-maps-delivered-in-3d-sound\/\">research behind the product<\/a>.<\/p>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/blind-reverberation-time-estimation-using-a-convolutional-neural-network\/\" data-bi-cN=\"Blind reverberation time estimation using a convolutional neural network\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Blind reverberation time estimation using a convolutional neural network<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Video<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/microsoft-soundscape-map-delivered-3d-sound\/\" data-bi-cN=\"Microsoft Soundscape: A Map Delivered in 3D Sound\" data-external-link=\"false\" 
data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Microsoft Soundscape: A Map Delivered in 3D Sound<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n<\/div>\n<\/div>\n\n\t\t<\/div>\n\t\t<div class=\"moment__dot moment__dot--end\" role=\"presentation\"><\/div>\n\t<\/li>\n\t\n\n\t<li class=\"wp-block-msr-block-moment moment has-date\" data-bi-aN=\"block-moment\">\n\t\t<div class=\"moment__dot moment__dot--start\" role=\"presentation\"><\/div>\n\t\t<div role=\"presentation\"><\/div>\n\t\t<div class=\"moment__details\">\n\t\t\t\t\t\t<div class=\"moment__counter\"><\/div>\n\t\t\t\t\t\t\t<div class=\"moment__date-year\">\n\t\t\t\t\t2018\t\t\t\t<\/div>\n\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t<div class=\"moment__content\">\n\t\t\t\n\n<h3 id=\"podcast-hearing-in-3d-with-dr-ivan-tashev\" class=\"moment__title\">Podcast: Hearing in 3D with Dr. 
Ivan Tashev<\/h3>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"576\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/11\/Ivan-Tashev_Pod_Site_10_2018_1400x788-1024x576.png\" alt=\"Ivan Tashev podcast\" class=\"wp-image-549597\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/11\/Ivan-Tashev_Pod_Site_10_2018_1400x788-1024x576.png 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/11\/Ivan-Tashev_Pod_Site_10_2018_1400x788-300x169.png 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/11\/Ivan-Tashev_Pod_Site_10_2018_1400x788-768x432.png 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/11\/Ivan-Tashev_Pod_Site_10_2018_1400x788-1066x600.png 1066w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/11\/Ivan-Tashev_Pod_Site_10_2018_1400x788-655x368.png 655w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/11\/Ivan-Tashev_Pod_Site_10_2018_1400x788-343x193.png 343w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/11\/Ivan-Tashev_Pod_Site_10_2018_1400x788.png 1400w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<figure class=\"wp-block-audio\"><audio controls src=\"https:\/\/media.blubrry.com\/microsoftresearch\/b\/content.blubrry.com\/microsoftresearch\/msr_050_tashev.mp3\" preload=\"none\"><\/audio><figcaption><br>In this podcast, Dr. 
Tashev provides an overview of the quest for better sound processing and speech enhancement, describes the latest innovations in 3D audio, and explains why the research behind audio processing technology is, thanks to variations in human perception, equal parts science, art and craft.<\/figcaption><\/figure>\n\n\n\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p>Key publications from 2018:<\/p>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/a-sparsity-measure-for-echo-density-growth-in-general-environments\/\" data-bi-cN=\"A Sparsity Measure for Echo Density Growth in General Environments\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>A Sparsity Measure for Echo Density Growth in General Environments<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a 
href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/blind-room-volume-estimation-from-single-channel-noisy-speech\/\" data-bi-cN=\"Blind Room Volume Estimation from Single-channel Noisy Speech\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Blind Room Volume Estimation from Single-channel Noisy Speech<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/capture-representation-and-rendering-of-3d-audio-for-virtual-and-augmented-reality\/\" data-bi-cN=\"Capture, representation, and rendering of 3D audio for virtual and augmented reality\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Capture, representation, and rendering of 3D audio for virtual and augmented reality<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a 
href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/improving-binaural-ambisonics-decoding-by-spherical-harmonics-domain-tapering-and-coloration-compensation\/\" data-bi-cN=\"Improving Binaural Ambisonics Decoding by Spherical Harmonics Domain Tapering and Coloration Compensation\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Improving Binaural Ambisonics Decoding by Spherical Harmonics Domain Tapering and Coloration Compensation<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/spectral-manipulation-improves-elevation-perception-with-non-individualized-head-related-transfer-functions\/\" data-bi-cN=\"Spectral manipulation improves elevation perception with non-individualized head-related transfer functions\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Spectral manipulation improves elevation perception with non-individualized head-related transfer functions<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n<\/div>\n<\/div>\n\n\t\t<\/div>\n\t\t<div class=\"moment__dot moment__dot--end\" role=\"presentation\"><\/div>\n\t<\/li>\n\t\n\t\t<\/ol>\n\t<\/div>\n\t\n\n\n<hr class=\"wp-block-separator has-text-color has-green-color 
has-css-opacity has-green-background-color has-background is-style-dots\"\/>\n\n\n\n<h2 id=\"acoustic-simulation\" class=\"alignwide has-text-align-wide\">Acoustic simulation<\/h2>\n\n\n\t<div class=\"wp-block-msr-block-journey journey journey--date alignwide\" data-bi-aN=\"block-journey\">\n\t\t<ol class=\"journey__list\">\n\t\t\t\n\t<li class=\"wp-block-msr-block-moment moment has-date\" data-bi-aN=\"block-moment\">\n\t\t<div class=\"moment__dot moment__dot--start\" role=\"presentation\"><\/div>\n\t\t<div role=\"presentation\"><\/div>\n\t\t<div class=\"moment__details\">\n\t\t\t\t\t\t<div class=\"moment__counter\"><\/div>\n\t\t\t\t\t\t\t<div class=\"moment__date-year\">\n\t\t\t\t\t2010\t\t\t\t<\/div>\n\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t<div class=\"moment__content\">\n\t\t\t\n\n<h3 id=\"microsoft-researchers-establish-project-triton\" class=\"moment__title\">Microsoft researchers establish Project Triton<\/h3>\n\n\n\n<p>Prior to 2010, a key challenge in interactive audio had been the fast modeling of wave effects in complex game scenes, such as smooth sound obstruction around doorways and dynamic reverberation that responds to both source and listener motion. 
In the paper below, Microsoft researchers introduced the idea of pre-computing physically accurate wave simulations, and showed that it was a viable path forward for interactive audio and games.<\/p>\n\n\n\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/project\/project-triton\/\">Project Triton<\/a> explores a physics-based approach to modeling virtual environments, for more realistic in-game audio.<\/p>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/precomputed-wave-simulation-real-time-sound-propagation-dynamic-sources-complex-scenes\/\" data-bi-cN=\"Precomputed Wave Simulation for Real-Time Sound Propagation of Dynamic Sources in Complex Scenes\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Precomputed Wave Simulation for Real-Time Sound Propagation of Dynamic Sources in Complex Scenes<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\"><\/div>\n<\/div>\n\n\t\t<\/div>\n\t\t<div class=\"moment__dot moment__dot--end\" role=\"presentation\"><\/div>\n\t<\/li>\n\t\n\n\t<li class=\"wp-block-msr-block-moment moment has-date\" 
data-bi-aN=\"block-moment\">\n\t\t<div class=\"moment__dot moment__dot--start\" role=\"presentation\"><\/div>\n\t\t<div role=\"presentation\"><\/div>\n\t\t<div class=\"moment__details\">\n\t\t\t\t\t\t<div class=\"moment__counter\"><\/div>\n\t\t\t\t\t\t\t<div class=\"moment__date-year\">\n\t\t\t\t\t2012\t\t\t\t<\/div>\n\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t<div class=\"moment__content\">\n\t\t\t\n\n<h3 id=\"researchers-begin-collaboration-with-game-studios\" class=\"moment__title\">Researchers begin collaboration with game studios<\/h3>\n\n\n\n<p>Microsoft researchers begin collaborating with The Coalition Studio to incorporate this acoustic simulation work into Gears of War, transitioning from exploratory research to a targeted redesign focused on performance and flexibility.<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li>2013: The first working prototype of Project Triton is demonstrated internally.<\/li><li>2014: This <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/parametric-wave-field-coding-precomputed-sound-propagation\/\">paper<\/a> describes the core design of Project Triton, combining perceptual coding, spatial compression and parametric rendering. The design solves the problem of system resource usage, and integrates easily into existing audio tools. 
Later work builds on this core design with further improvements.<\/li><li>2015: A Microsoft Research summer intern develops a <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/adaptive-sampling-for-sound-propagation\/\">novel adaptive sampling approach<\/a> to resolve a key robustness issue in Project Triton.<\/li><\/ul>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/adaptive-sampling-for-sound-propagation\/\" data-bi-cN=\"Adaptive Sampling For Sound Propagation\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Adaptive Sampling For Sound Propagation<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a 
href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/parametric-wave-field-coding-precomputed-sound-propagation\/\" data-bi-cN=\"Parametric Wave Field Coding for Precomputed Sound Propagation\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Parametric Wave Field Coding for Precomputed Sound Propagation<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n<\/div>\n<\/div>\n\n\t\t<\/div>\n\t\t<div class=\"moment__dot moment__dot--end\" role=\"presentation\"><\/div>\n\t<\/li>\n\t\n\n\t<li class=\"wp-block-msr-block-moment moment has-date\" data-bi-aN=\"block-moment\">\n\t\t<div class=\"moment__dot moment__dot--start\" role=\"presentation\"><\/div>\n\t\t<div role=\"presentation\"><\/div>\n\t\t<div class=\"moment__details\">\n\t\t\t\t\t\t<div class=\"moment__counter\"><\/div>\n\t\t\t\t\t\t\t<div class=\"moment__date-year\">\n\t\t\t\t\t2016\t\t\t\t<\/div>\n\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t<div class=\"moment__content\">\n\t\t\t\n\n<h3 id=\"project-triton-ships-in-gears-of-war-4\" class=\"moment__title\">Project Triton ships in Gears of War 4<\/h3>\n\n\n\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/project\/project-triton\/\">Project Triton<\/a> ships as part of Gears of War 4 \u2013 the first instance of game acoustics provided by accurate physics-based simulation.<\/p>\n\n\n\n<figure class=\"wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"wp-block-embed__wrapper\">\n<iframe loading=\"lazy\" title=\"Gears of War 4, Project Triton: Pre-Computed Environmental Wave Acoustics\" width=\"500\" height=\"281\" src=\"https:\/\/www.youtube-nocookie.com\/embed\/qCUEGvIgco8?feature=oembed&rel=0\" frameborder=\"0\" 
allowfullscreen><\/iframe>\n<\/div><figcaption>GDC 2017 talk on Gears of War integration<\/figcaption><\/figure>\n\n\t\t<\/div>\n\t\t<div class=\"moment__dot moment__dot--end\" role=\"presentation\"><\/div>\n\t<\/li>\n\t\n\n\t<li class=\"wp-block-msr-block-moment moment has-date\" data-bi-aN=\"block-moment\">\n\t\t<div class=\"moment__dot moment__dot--start\" role=\"presentation\"><\/div>\n\t\t<div role=\"presentation\"><\/div>\n\t\t<div class=\"moment__details\">\n\t\t\t\t\t\t<div class=\"moment__counter\"><\/div>\n\t\t\t\t\t\t\t<div class=\"moment__date-year\">\n\t\t\t\t\t2017\t\t\t\t<\/div>\n\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t<div class=\"moment__content\">\n\t\t\t\n\n<h3 id=\"project-triton-in-virtual-and-mixed-reality\" class=\"moment__title\">Project Triton in Virtual and Mixed Reality <\/h3>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"576\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_2017_Windows10-Mixed-Reality-screenshot-1024x576.jpg\" alt=\"Screenshot of the Mixed Reality experience in Windows 10\" class=\"wp-image-687351\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_2017_Windows10-Mixed-Reality-screenshot-1024x576.jpg 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_2017_Windows10-Mixed-Reality-screenshot-300x169.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_2017_Windows10-Mixed-Reality-screenshot-768x432.jpg 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_2017_Windows10-Mixed-Reality-screenshot-1066x600.jpg 1066w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_2017_Windows10-Mixed-Reality-screenshot-655x368.jpg 655w, 
https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_2017_Windows10-Mixed-Reality-screenshot-343x193.jpg 343w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_2017_Windows10-Mixed-Reality-screenshot-640x360.jpg 640w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_2017_Windows10-Mixed-Reality-screenshot-960x540.jpg 960w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_2017_Windows10-Mixed-Reality-screenshot-1280x720.jpg 1280w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_2017_Windows10-Mixed-Reality-screenshot.jpg 1485w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p>After years of development and refinement for use in games, Project Triton is used in the Mixed Reality experience shipped as part of the Windows 10 Fall Creators Update. It provides a natural acoustic experience in the virtual \u201ccliffhouse\u201d space, with new directional acoustics features such as sound that is obstructed by virtual objects, or heard as if coming around corners or through doorways. This experience also incorporates advances in HRTFs described in the previous timeline.<\/p>\n\n\n\n<p>In 2018, Project Triton ships as part of Sea of Thieves, the second game to incorporate this technology. 
The game includes custom modifications for evaluating acoustics modularly, illustrating the flexibility of the system.<\/p>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/spatial-audio-for-immersive-sound-propagation\/\" data-bi-cN=\"Parametric Directional Coding for Precomputed Sound Propagation\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Parametric Directional Coding for Precomputed Sound Propagation<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t\t\t<p class=\"annotations__caption text-neutral-400 mt-2\">This SIGGRAPH paper describes improvements to Triton for encoding and rendering directional acoustic effects.<\/p>\n\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\"><\/div>\n<\/div>\n\n\t\t<\/div>\n\t\t<div class=\"moment__dot moment__dot--end\" role=\"presentation\"><\/div>\n\t<\/li>\n\t\n\n\t<li class=\"wp-block-msr-block-moment moment has-date\" data-bi-aN=\"block-moment\">\n\t\t<div class=\"moment__dot moment__dot--start\" role=\"presentation\"><\/div>\n\t\t<div role=\"presentation\"><\/div>\n\t\t<div class=\"moment__details\">\n\t\t\t\t\t\t<div class=\"moment__counter\"><\/div>\n\t\t\t\t\t\t\t<div 
class=\"moment__counter\"><\/div>\n\t\t\t\t\t\t\t<div class=\"moment__date-year\">\n\t\t\t\t\t2019\t\t\t\t<\/div>\n\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t<div class=\"moment__content\">\n\t\t\t\n\n<h3 id=\"podcast-project-triton-and-the-physics-of-sound-with-dr-nikunj-raghuvanshi\" class=\"moment__title\">Podcast: Project Triton and the Physics of Sound with Dr. Nikunj Raghuvanshi<\/h3>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"576\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2019\/03\/Nikunj-Raghuvanshi-POD_Site_02_2019_1400x788-1024x576.png\" alt=\"Nikunj Raghuvanshi\" class=\"wp-image-573663\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2019\/03\/Nikunj-Raghuvanshi-POD_Site_02_2019_1400x788-1024x576.png 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2019\/03\/Nikunj-Raghuvanshi-POD_Site_02_2019_1400x788-300x169.png 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2019\/03\/Nikunj-Raghuvanshi-POD_Site_02_2019_1400x788-768x432.png 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2019\/03\/Nikunj-Raghuvanshi-POD_Site_02_2019_1400x788-1066x600.png 1066w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2019\/03\/Nikunj-Raghuvanshi-POD_Site_02_2019_1400x788-655x368.png 655w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2019\/03\/Nikunj-Raghuvanshi-POD_Site_02_2019_1400x788-343x193.png 343w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2019\/03\/Nikunj-Raghuvanshi-POD_Site_02_2019_1400x788.png 1400w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<figure class=\"wp-block-audio\"><audio controls src=\"https:\/\/media.blubrry.com\/microsoftresearch\/b\/content.blubrry.com\/microsoftresearch\/msr_068_raghuvanshi.mp3\" preload=\"none\"><\/audio><figcaption><br>In this podcast, Dr. 
Raghuvanshi wants you to hear how sound really travels \u2013 in rooms, around corners, behind walls, out doors \u2013 and he\u2019s using computational physics to do it.<\/figcaption><\/figure>\n\n\t\t<\/div>\n\t\t<div class=\"moment__dot moment__dot--end\" role=\"presentation\"><\/div>\n\t<\/li>\n\t\n\n\t<li class=\"wp-block-msr-block-moment moment has-date\" data-bi-aN=\"block-moment\">\n\t\t<div class=\"moment__dot moment__dot--start\" role=\"presentation\"><\/div>\n\t\t<div role=\"presentation\"><\/div>\n\t\t<div class=\"moment__details\">\n\t\t\t\t\t\t<div class=\"moment__counter\"><\/div>\n\t\t\t\t\t\t\t<div class=\"moment__date-year\">\n\t\t\t\t\t2019\t\t\t\t<\/div>\n\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t<div class=\"moment__content\">\n\t\t\t\n\n<h3 id=\"project-triton-technology-released-as-project-acoustics\" class=\"moment__title\">Project Triton technology released as Project Acoustics<\/h3>\n\n\n\n<p>Microsoft makes Project Triton technology available to developers as <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/docs.microsoft.com\/en-us\/gaming\/acoustics\/what-is-acoustics\" target=\"_blank\" aria-label=\"undefined (opens in a new tab)\" rel=\"noopener noreferrer\">Project Acoustics<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, including Unity and Unreal plugins for easy integration into games and research prototypes.<\/p>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold 
text-neutral-300 small\">Video<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/project-triton-making-waves-with-acoustics\/\" data-bi-cN=\"Project Acoustics: Making Waves with Triton\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Project Acoustics: Making Waves with Triton<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Talk<\/span>\n\t\t\t<a href=\"https:\/\/www.youtube.com\/watch?v=uY4G-GUAQIE\" data-bi-cN=\"Project Acoustics | Game Developers Conference 2019\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Project Acoustics | Game Developers Conference 2019<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n<\/div>\n<\/div>\n\n\n\n<ul class=\"wp-block-list\"><li>2019: Gears of War 5 ships, with an immersive audio experience that combines headphone rendering technologies such as Windows Sonic and Dolby Atmos with Triton\u2019s scene-informed sound propagation.<\/li><li>2019: Borderlands 3 ships. 
It is the first game from a studio outside Microsoft to employ Project Triton.<\/li><\/ul>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"577\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_2017_Borderlands3-screenshot-1024x577.png\" alt=\"Screenshot of Borderlands 3 game\" class=\"wp-image-687375\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_2017_Borderlands3-screenshot-1024x577.png 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_2017_Borderlands3-screenshot-300x169.png 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_2017_Borderlands3-screenshot-768x433.png 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_2017_Borderlands3-screenshot-1066x600.png 1066w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_2017_Borderlands3-screenshot-655x368.png 655w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_2017_Borderlands3-screenshot-343x193.png 343w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_2017_Borderlands3-screenshot-640x360.png 640w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_2017_Borderlands3-screenshot-960x540.png 960w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_2017_Borderlands3-screenshot-1280x720.png 1280w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_2017_Borderlands3-screenshot.png 1416w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p><\/p>\n\n\t\t<\/div>\n\t\t<div class=\"moment__dot moment__dot--end\" 
role=\"presentation\"><\/div>\n\t<\/li>\n\t\n\n\t<li class=\"wp-block-msr-block-moment moment has-date\" data-bi-aN=\"block-moment\">\n\t\t<div class=\"moment__dot moment__dot--start\" role=\"presentation\"><\/div>\n\t\t<div role=\"presentation\"><\/div>\n\t\t<div class=\"moment__details\">\n\t\t\t\t\t\t<div class=\"moment__counter\"><\/div>\n\t\t\t\t\t\t\t<div class=\"moment__date-year\">\n\t\t\t\t\t2020\t\t\t\t<\/div>\n\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t<div class=\"moment__content\">\n\t\t\t\n\n<h3 id=\"project-acoustics-incorporated-into-hololens\" class=\"moment__title\">Project Acoustics incorporated into HoloLens<\/h3>\n\n\n\n<p>This milestone marks the first demonstration of physical acoustics in augmented reality.<\/p>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/cloud-enabled-interactive-sound-propagation-for-untethered-mixed-reality\/\" data-bi-cN=\"Cloud-Enabled Interactive Sound Propagation for Untethered Mixed Reality\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Cloud-Enabled Interactive Sound Propagation for Untethered Mixed Reality<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n<\/div>\n\n\n\n<div 
class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Talk<\/span>\n\t\t\t<a href=\"https:\/\/aka.ms\/mracoustics\" data-bi-cN=\"Using Project Acoustics with HoloLens 2\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Using Project Acoustics with HoloLens 2<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n<\/div>\n<\/div>\n\n\t\t<\/div>\n\t\t<div class=\"moment__dot moment__dot--end\" role=\"presentation\"><\/div>\n\t<\/li>\n\t\n\n\t<li class=\"wp-block-msr-block-moment moment has-date\" data-bi-aN=\"block-moment\">\n\t\t<div class=\"moment__dot moment__dot--start\" role=\"presentation\"><\/div>\n\t\t<div role=\"presentation\"><\/div>\n\t\t<div class=\"moment__details\">\n\t\t\t\t\t\t<div class=\"moment__counter\"><\/div>\n\t\t\t\t\t\t\t<div class=\"moment__date-year\">\n\t\t\t\t\t2020\t\t\t\t<\/div>\n\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t<div class=\"moment__content\">\n\t\t\t\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"576\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/11\/MSR_Nikunj_Raghuvanshi_Webinar_Hero_1400x788-1024x576.png\" alt=\"Webinar with Nikunj Raghuvanshi\" class=\"wp-image-682326\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/11\/MSR_Nikunj_Raghuvanshi_Webinar_Hero_1400x788-1024x576.png 1024w, 
https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/11\/MSR_Nikunj_Raghuvanshi_Webinar_Hero_1400x788-300x169.png 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/11\/MSR_Nikunj_Raghuvanshi_Webinar_Hero_1400x788-768x432.png 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/11\/MSR_Nikunj_Raghuvanshi_Webinar_Hero_1400x788-1066x600.png 1066w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/11\/MSR_Nikunj_Raghuvanshi_Webinar_Hero_1400x788-655x368.png 655w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/11\/MSR_Nikunj_Raghuvanshi_Webinar_Hero_1400x788-343x193.png 343w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/11\/MSR_Nikunj_Raghuvanshi_Webinar_Hero_1400x788-640x360.png 640w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/11\/MSR_Nikunj_Raghuvanshi_Webinar_Hero_1400x788-960x540.png 960w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/11\/MSR_Nikunj_Raghuvanshi_Webinar_Hero_1400x788-1280x720.png 1280w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/11\/MSR_Nikunj_Raghuvanshi_Webinar_Hero_1400x788.png 1400w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p>In this <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/note.microsoft.com\/MSR-Webinar-Sound-Simulation-Registration-Live.html\" target=\"_blank\" aria-label=\"undefined (opens in a new tab)\" rel=\"noopener noreferrer\">webinar<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Microsoft Principal Researcher <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/nikunjr\/\">Dr. Nikunj Raghuvanshi<\/a> covers the ins and outs of creating practical, high-quality sound simulations. 
It includes an overview of the three components of sound simulation: synthesis, propagation, and spatialization, as well as a focus on <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/project\/project-triton\/\">Project Triton<\/a>. For each, he reviews the underlying physics, research techniques, practical considerations, and open research questions.<\/p>\n\n\t\t<\/div>\n\t\t<div class=\"moment__dot moment__dot--end\" role=\"presentation\"><\/div>\n\t<\/li>\n\t\n\t\t<\/ol>\n\t<\/div>\n\t\n\n\n<hr class=\"wp-block-separator has-text-color has-teal-color has-css-opacity has-teal-background-color has-background is-style-dots\"\/>\n\n\n\n<h2 id=\"audio-analytics\" class=\"alignwide has-text-align-wide\">Audio analytics<\/h2>\n\n\n\t<div class=\"wp-block-msr-block-journey journey journey--date alignwide\" data-bi-aN=\"block-journey\">\n\t\t<ol class=\"journey__list\">\n\t\t\t\n\t<li class=\"wp-block-msr-block-moment moment has-date\" data-bi-aN=\"block-moment\">\n\t\t<div class=\"moment__dot moment__dot--start\" role=\"presentation\"><\/div>\n\t\t<div role=\"presentation\"><\/div>\n\t\t<div class=\"moment__details\">\n\t\t\t\t\t\t<div class=\"moment__counter\"><\/div>\n\t\t\t\t\t\t\t<div class=\"moment__date-year\">\n\t\t\t\t\t2010\t\t\t\t<\/div>\n\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t<div class=\"moment__content\">\n\t\t\t\n\n<h3 id=\"audio-analytics-project-established\" class=\"moment__title\">Audio Analytics project established<\/h3>\n\n\n\n<p>Microsoft researchers establish the <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/project\/audio-analytics\/\">Audio Analytics<\/a> project to explore research directions such as extracting non-verbal cues from human speech, detecting specific audio events and background noise, and enabling audio search and retrieval. 
Potential applications include customer satisfaction analysis from customer support calls, media content analysis and retrieval, medical diagnostic aids and patient monitoring, assistive technologies for people with hearing impairments, and audio analysis for public safety.<\/p>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/a-new-speaker-identification-algorithm-for-gaming-scenarios\/\" data-bi-cN=\"A New Speaker Identification Algorithm for Gaming Scenarios\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>A New Speaker Identification Algorithm for Gaming Scenarios<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/speech-emotion-recognition-using-deep-neural-network-and-extreme-learning-machine\/\" data-bi-cN=\"Speech Emotion Recognition Using Deep 
Neural Network and Extreme Learning Machine\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Speech Emotion Recognition Using Deep Neural Network and Extreme Learning Machine<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/high-level-feature-representation-using-recurrent-neural-network-for-speech-emotion-recognition\/\" data-bi-cN=\"High-level Feature Representation using Recurrent Neural Network for Speech Emotion Recognition\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>High-level Feature Representation using Recurrent Neural Network for Speech Emotion Recognition<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n<\/div>\n<\/div>\n\n\t\t<\/div>\n\t\t<div class=\"moment__dot moment__dot--end\" role=\"presentation\"><\/div>\n\t<\/li>\n\t\n\n\t<li class=\"wp-block-msr-block-moment moment has-date\" data-bi-aN=\"block-moment\">\n\t\t<div class=\"moment__dot moment__dot--start\" role=\"presentation\"><\/div>\n\t\t<div role=\"presentation\"><\/div>\n\t\t<div class=\"moment__details\">\n\t\t\t\t\t\t<div class=\"moment__counter\"><\/div>\n\t\t\t\t\t\t\t<div 
class=\"moment__date-year\">\n\t\t\t\t\t2015\t\t\t\t<\/div>\n\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t<div class=\"moment__content\">\n\t\t\t\n\n<h3 id=\"hey-cortana-uses-speaker-identification\" class=\"moment__title\">&#8220;Hey, Cortana&#8221; uses speaker identification<\/h3>\n\n\n\n<p>Microsoft releases Windows 10 with speaker identification as part of the \u201cHey, Cortana\u201d wake-up feature.<\/p>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/learning-utterance-level-representations-speech-emotion-agegender-recognition-using-deep-neural-networks\/\" data-bi-cN=\"Learning Utterance-level Representations for Speech Emotion and Age\/Gender Recognition Using Deep Neural Networks\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Learning Utterance-level Representations for Speech Emotion and Age\/Gender Recognition Using Deep Neural Networks<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase 
font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/a-cross-modal-audio-search-engine-based-on-joint-audio-text-embeddings\/\" data-bi-cN=\"A Cross-modal Audio Search Engine based on Joint Audio-Text Embeddings\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>A Cross-modal Audio Search Engine based on Joint Audio-Text Embeddings<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<p><\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/supervised-deep-hashing-for-efficient-audio-event-retrieval\/\" data-bi-cN=\"Supervised Deep Hashing for Efficient Audio Event Retrieval\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>Supervised Deep Hashing for Efficient Audio Event Retrieval<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n<\/div>\n<\/div>\n\n\t\t<\/div>\n\t\t<div class=\"moment__dot moment__dot--end\" role=\"presentation\"><\/div>\n\t<\/li>\n\t\n\t\t<\/ol>\n\t<\/div>\n\t\n\n\n<p><a href=\"#top\">Back to top ><\/a><\/p>\n\n\n<p><!-- \/wp:post-content --><\/p>\n<p><!-- \/wp:msr\/block-journey --><\/p>\n<p><!-- 
\/wp:msr\/block-journey --><\/p>\n<p><!-- wp:paragraph --><\/p>\n<p><!-- \/wp:paragraph --><\/p>","protected":false},"excerpt":{"rendered":"<p>Getting the sound right is a crucial ingredient in natural user interfaces, immersive gaming, realistic virtual and mixed reality, and ubiquitous computing.<\/p>\n","protected":false},"author":38127,"featured_media":682323,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","msr-author-ordering":[],"msr_hide_image_in_river":0,"footnotes":""},"categories":[194468,1,244017],"tags":[],"research-area":[243062,13552],"msr-region":[],"msr-event-type":[],"msr-locale":[268875],"msr-post-option":[243984],"msr-impact-theme":[],"msr-promo-type":[],"msr-podcast-series":[],"class_list":["post-681651","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-audio","category-research-blog","category-research-collection","msr-research-area-audio-acoustics","msr-research-area-hardware-devices","msr-locale-en_us","msr-post-option-blog-homepage-featured"],"msr_event_details":{"start":"","end":"","location":""},"podcast_url":"","podcast_episode":"","msr_research_lab":[],"msr_impact_theme":[],"related-publications":[],"related-downloads":[],"related-videos":[],"related-academic-programs":[],"related-groups":[144923],"related-projects":[],"related-events":[],"related-researchers":[],"msr_type":"Post","featured_image_thumbnail":"<img width=\"960\" height=\"540\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_anachoic-chamber_1400x788-960x540.jpg\" class=\"img-object-cover\" alt=\"audio and acoustics: woman and man setting up a dummy in anachoic chamber\" decoding=\"async\" loading=\"lazy\" 
srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_anachoic-chamber_1400x788-960x540.jpg 960w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_anachoic-chamber_1400x788-300x169.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_anachoic-chamber_1400x788-1024x576.jpg 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_anachoic-chamber_1400x788-768x432.jpg 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_anachoic-chamber_1400x788-1066x600.jpg 1066w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_anachoic-chamber_1400x788-655x368.jpg 655w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_anachoic-chamber_1400x788-343x193.jpg 343w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_anachoic-chamber_1400x788-640x360.jpg 640w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_anachoic-chamber_1400x788-1280x720.jpg 1280w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/08\/Acoustics_anachoic-chamber_1400x788.jpg 1400w\" sizes=\"auto, (max-width: 960px) 100vw, 960px\" \/>","byline":"","formattedDate":"August 12, 2020","formattedExcerpt":"Getting the sound right is a crucial ingredient in natural user interfaces, immersive gaming, realistic virtual and mixed reality, and ubiquitous 
computing.","locale":{"slug":"en_us","name":"English","native":"","english":"English"},"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts\/681651","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/users\/38127"}],"replies":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/comments?post=681651"}],"version-history":[{"count":190,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts\/681651\/revisions"}],"predecessor-version":[{"id":894468,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts\/681651\/revisions\/894468"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media\/682323"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=681651"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/categories?post=681651"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/tags?post=681651"},{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=681651"},{"taxonomy":"msr-region","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-region?post=681651"},{"taxonomy":"msr-event-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event-type?post=681651"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/m
sr-locale?post=681651"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=681651"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=681651"},{"taxonomy":"msr-promo-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-promo-type?post=681651"},{"taxonomy":"msr-podcast-series","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-podcast-series?post=681651"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}