{"id":144923,"date":"2011-10-01T00:14:10","date_gmt":"2011-10-01T07:14:10","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/group\/audio-and-acoustics-research-group\/"},"modified":"2026-01-29T15:01:50","modified_gmt":"2026-01-29T23:01:50","slug":"audio-and-acoustics-research-group","status":"publish","type":"msr-group","link":"https:\/\/www.microsoft.com\/en-us\/research\/group\/audio-and-acoustics-research-group\/","title":{"rendered":"Audio and Acoustics Research Group"},"content":{"rendered":"<section class=\"mb-3 moray-highlight\">\n\t<div class=\"card-img-overlay mx-lg-0\">\n\t\t<div class=\"card-background  has-background-gable-green card-background--inset-right\">\n\t\t\t<img loading=\"lazy\" decoding=\"async\" width=\"2560\" height=\"1715\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/10\/EB2A6816-scaled.jpg\" class=\"attachment-full size-full\" alt=\"Audio and Acoustics Research Group\" style=\"\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/10\/EB2A6816-scaled.jpg 2560w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/10\/EB2A6816-300x201.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/10\/EB2A6816-1024x686.jpg 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/10\/EB2A6816-768x514.jpg 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/10\/EB2A6816-1536x1029.jpg 1536w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/10\/EB2A6816-2048x1372.jpg 2048w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/10\/EB2A6816-240x161.jpg 240w\" sizes=\"auto, (max-width: 2560px) 100vw, 2560px\" \/>\t\t<\/div>\n\t\t<!-- Foreground -->\n\t\t<div class=\"card-foreground d-flex mt-md-n5 my-lg-5 px-g px-lg-0\">\n\t\t\t<!-- Container -->\n\t\t\t<div class=\"container d-flex mt-md-n5 my-lg-5 align-self-center\">\n\t\t\t\t<!-- 
Card wrapper -->\n\t\t\t\t<div class=\"w-100 w-lg-col-5\">\n\t\t\t\t\t<!-- Card -->\n\t\t\t\t\t<div class=\"card material-md-card py-5 px-md-5\">\n\t\t\t\t\t\t<div class=\"card-body \">\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/lab\/microsoft-research-redmond\/\" class=\"icon-link icon-link--reverse mb-2\" data-bi-cN=\"Return to Microsoft Research Lab - Redmond\">\n\t\t\t\t\t\t\t\t\t<span class=\"c-glyph glyph-chevron-left\" aria-hidden=\"true\"><\/span>\n\t\t\t\t\t\t\t\t\tReturn to Microsoft Research Lab &#8211; Redmond\t\t\t\t\t\t\t\t<\/a>\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\n<h1 class=\"wp-block-heading h2\" id=\"audio-and-acoustics-research-group\">Audio and Acoustics Research Group<\/h1>\n\n\t\t\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/div>\n<\/section>\n\n\n\n\n\n<p class=\"has-text-align-left\">The Audio and Acoustics Group conducts research in audio processing and speech enhancement, 3D audio perception and technologies, devices for audio capture and rendering, array processing, and information extraction from audio signals.<\/p>\n\n\n\n<p class=\"has-text-align-left\">The mission of the Audio and Acoustics Group is to develop state-of-the-art algorithms and designs for audio processing, speech enhancement, and 3D audio capture and rendering. We also work to improve the acoustical design of audio devices, such as microphones and loudspeakers. In addition, we conduct research on extracting information from audio signals, such as speaker identification and emotion detection. Our goal is to create technologies that enable natural interaction with computers through speech and audio. 
At the same time, we aim to impact Microsoft&#8217;s current and future offerings in these areas.<\/p>\n\n\n\n<p class=\"has-text-align-left\">The contact for the Audio and Acoustics Research Group is <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/ivantash\/\" target=\"_blank\" rel=\"noreferrer noopener\">Ivan Tashev<\/a>.<\/p>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n\n\n\n\n<p>Deeksha Moodasarige Shama, Johns Hopkins University, USA. <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/brain-signals-to-action-monitoring-and-explaining-user-cognitive-load-with-foundation-models\/\">BRAIN SIGNALS TO ACTION: Monitoring and Explaining User Cognitive Load with Foundation Models<\/a>.<\/p>\n\n\n\n<p>Margarita Geleta, UC Berkeley, USA. <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/spatial-audio-rendering-for-speech-live-translation\/\">Spatial Audio Rendering for Speech Live Translation<\/a>.<\/p>\n\n\n\n<p>Parthasaarathy Sudarsanam, Tampere University, Finland. <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/foa-tokenizer-learning-discrete-representations-of-spatial-audio-with-multichannel-vq-gan\/\">FOA Tokenizer: Learning Discrete Representations of Spatial Audio with Multichannel VQ-GAN<\/a>.<\/p>\n\n\n\n<p>Tsun-An Hsieh, University of Illinois Urbana-Champaign, USA. Towards Real-Time Generative Speech Restoration with Causal Flow Matching.<\/p>\n\n\n\n<p>Xilin Jiang, Columbia University, New York, USA. Teaching spatial audio understanding to an LLM.<\/p>\n\n\n\n<p>Yinghao Ma, Queen Mary University of London, UK. Generative Audio at the Speed of Interaction: Exploring Flow Matching for Controllable Music Synthesis.<\/p>\n\n\n\n\n\n<p>Ali Vosoughi, University of Rochester, New York, USA. Audio-Video Learning from Unlabeled Data by Leveraging Multimodal LLMs.<\/p>\n\n\n\n<p>Benjamin Stahl, University of Music and Performing Arts Graz, Austria. 
<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/final-intern-talk-distilling-self-supervised-learning-based-speech-quality-assessment-into-compact\/\">Distilling Self-Supervised-Learning-Based Speech Quality Assessment into Compact Models<\/a>.<\/p>\n\n\n\n<p>Elisabeth Heremans, KU Leuven, Belgium. <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/shining-light-on-the-learning-brain-estimating-mental-workload-in-a-simulated-flight-task-using-opt\/\">Shining light on the learning brain: Estimating mental workload in a simulated flight task using optical f-NIRS signals<\/a>.<\/p>\n\n\n\n<p>Gene-Ping Yang, University of Edinburgh, UK. Distributed asynchronous device speech enhancement using microphone permutation and number invariant windowed cross attention.<\/p>\n\n\n\n<p>Haibin Wu, National Taiwan University, Taiwan. Towards ultra-low latency speech enhancement &#8211; A comprehensive study.<\/p>\n\n\n\n<p>Jinhua Liang, Queen Mary University of London, UK. Audio-Visual Representation Learning and Generation in the Latent Space.<\/p>\n\n\n\n<p>Shivam Mehta, KTH Royal Institute of Technology, Stockholm, Sweden. <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/make-some-noise-teaching-the-language-of-audio-to-an-llm-using-sound-tokens\/\">Make some noise: Teaching the language of audio to an LLM using sound tokens<\/a>.<\/p>\n\n\n\n\n\n<p>Ard Kastrati, ETH Zurich, Switzerland. Decoding Neurophysiological Responses for Improving Predictive Text Systems using Brain-Computer Interfaces.<\/p>\n\n\n\n<p>Azalea Gui, University of Toronto, Canada. <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/final-intern-talk-improving-frechet-audio-distance-for-generative-music-evaluation\/\">Improving Frechet Audio Distance for Generative Music Evaluation<\/a>.<\/p>\n\n\n\n<p>Eloi Moliner Juanpere, Aalto University in Espoo, Finland. 
<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/msr-talk-unsupervised-speech-reverberation-control-with-diffusion-implicit-bridges\/\">Unsupervised Speech Reverberation Control with Diffusion Implicit Bridges<\/a>.<\/p>\n\n\n\n<p>Michele Mancusi, Sapienza &#8211; University of Rome, Italy. Unsupervised Speech Separation Using Adversarial Loss and Additional Separation Losses.<\/p>\n\n\n\n<p>Ruihan Yang, University of California \u2013 Irvine, USA. <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/www.youtube.com\/watch?v=fpWH0JZJvsU\">Synchronized Audio-Visual Generation with a Joint Generative Diffusion Model and Contrastive Loss<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>.<\/p>\n\n\n\n<p>Tanmay Srivastava, Stony Brook University, USA. Private and Accessible Speech Commands in Head-Worn Devices.<\/p>\n\n\n\n<p>Yuanchao Li, University of Edinburgh, UK. A Comparative Study of Audio Encoders for Emotion in Real and Synthesized Music: Advancing Realistic Emotion Generation.<\/p>\n\n\n\n\n\n<p>Haleh Akrami, University of Southern California, CA, USA. <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/www.youtube.com\/watch?v=U5hcROugVkM\">Semi-supervised multi-task learning for acoustic parameter estimation<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>.<\/p>\n\n\n\n<p>Jeremy Hyrkas, University of California, CA, USA. <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/www.youtube.com\/watch?v=MjLoVCkyRbY\">Binaural spatial audio positioning in video calls<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>.<\/p>\n\n\n\n<p>Julian Neri, McGill University, Montreal, Canada. 
<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/www.youtube.com\/watch?v=-X2mdbmKEM8\">Real-Time Single-Channel Speech Separation in Noisy and Reverberant Environments<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>.<\/p>\n\n\n\n<p>Justin Kilmarx, The University of Texas at Austin, USA. Mapping the neural representational similarity of multiple object categories during visual imagery.<\/p>\n\n\n\n<p>Khandokar Md. Nayem, Indiana University, Bloomington, IN, USA. <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/www.youtube.com\/watch?v=_ggfv6eMIJs\">Unified Speech Enhancement Approach for Speech Degradations and Noise Suppression<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>.<\/p>\n\n\n\n<p>Sandeep Reddy Kothinti, The Johns Hopkins University, USA. Automated Audio Captioning: Methods and Metrics for Natural Language Description of Sounds.<\/p>\n\n\n\n<p>Sophia Mehdizadeh, Georgia Tech, USA. <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/microsoft.sharepoint.com\/teams\/MSRNVA01\/_layouts\/15\/stream.aspx?id=%2Fteams%2FMSRNVA01%2FVideos%2F56903%2Emp4\">Improving text prediction accuracy using neurophysiology<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>.<\/p>\n\n\n\n<p>Tan Gemicioglu, Georgia Tech, USA. 
<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/microsoft.sharepoint.com\/teams\/Resnet\/Lists\/Events\/SingleItem.aspx?FilterField1=ID&FilterValue1=56910\">Tongue Gesture Recognition in Head Mounted Displays.<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n\n\n\n\n\n<p>Wei-Cheng Lin, University of Texas at Dallas, USA. Toxic Speech and Speech Emotions: Investigations of Audio-based Modeling Methodology and Intercorrelations.<\/p>\n\n\n\n<p>Shoken Kaneko, University of Maryland, USA. <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/diablo-a-deep-individual\/\">DIABLo: a Deep Individual-Agnostic Binaural Localizer<\/a>.<\/p>\n\n\n\n<p>Justin Kilmarx, University of Texas at Austin, USA. <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/developing-a-brain-computer-interface-based-on-visual-imagery\/\">Developing a Brain-Computer Interface Based on Visual Imagery<\/a>.<\/p>\n\n\n\n<p>Viet Anh Trinh, City University of New York (CUNY), USA. <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/unsupervised-speech-enhancement\/\">Unsupervised Speech Enhancement<\/a>.<\/p>\n\n\n\n<p>Abu-Zaher Faridee, University of Maryland, USA. Non-Intrusive Multi-Task Speech Quality Assessment.<\/p>\n\n\n\n\n\n<p>Ali Aroudi, University of Oldenburg, Germany. Geometry-constrained Beamforming Network for end-to-end Farfield Sound Source Separation.<\/p>\n\n\n\n<p>Kuan-Jung Chiang, University of California \u2013 San Diego, USA. <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/a-closed-loop-adaptive-brain-computer-interface-framework\/\">A Closed-loop Adaptive Brain-computer Interface framework<\/a>.<\/p>\n\n\n\n<p>Midia Yousefi, University of Texas, Dallas, USA. Audio-based Toxic Language Detection.<\/p>\n\n\n\n<p>Shoken Kaneko, University of Maryland, College Park, USA. 
<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/resnet.microsoft.com\/video\/45240\">Forest Sound Scene Simulation and Bird Localization with Distributed Microphone Arrays<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>.<\/p>\n\n\n\n<p>Wenkang An, Carnegie Mellon University, USA. <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/resnet.microsoft.com\/video\/45256\">Decoding Music Attention from \u201cEEG headphones\u201d: a User-friendly Auditory Brain-computer Interface<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>.<\/p>\n\n\n\n\n\n<p>Arindam Jati, University of Southern California (USC), Los Angeles, USA. Supervised Deep Hashing for Efficient Audio Retrieval.<\/p>\n\n\n\n<p>Benjamin Martinez Elizalde, Carnegie Mellon University, USA. Sound event recognition for video-content analysis.<\/p>\n\n\n\n<p>Fabian Brinkmann, Technical University of Berlin, Germany. <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/efficient-and-perceptually-plausible-3-d-sound-for-virtual-reality\/\">Efficient and Perceptually Plausible 3-D Sound for Virtual Reality<\/a>.<\/p>\n\n\n\n<p>Hakim Si Mohammed, INRIA Rennes, France. Improving the Ergonomics and User-Friendliness of SSVEP-based BCIs in Virtual Reality.<\/p>\n\n\n\n<p>Md Tamzeed Islam, University of North Carolina at Chapel Hill, USA. Anthropometric Feature Estimation using Sensors on Headphone for HRTF Personalization.<\/p>\n\n\n\n<p>Morayo Ogunsina, Penn State Erie, USA. Hearing AI App for Sound-Based User Surrounding Awareness.<\/p>\n\n\n\n<p>Nicholas Huang, Johns Hopkins University, USA. Decoding Auditory Attention Via the Auditory Steady-State Response for Use in A Brain-Computer Interface.<\/p>\n\n\n\n<p>Sahar Hashemgeloogerdi, University of Rochester, USA. 
Integrating Beamforming and Multichannel Linear Prediction for Dereverberation and Denoising.<\/p>\n\n\n\n<p>Wenkang An, Carnegie Mellon University, USA. <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/decoding-multisensory-attention-from-electroencephalography-for-use-in-a-brain-computer-interface\/\">Decoding Multisensory Attention from Electroencephalography for Use in a Brain-Computer Interface<\/a>.<\/p>\n\n\n\n<p>Yangyang (Raymond) Xia, Carnegie Mellon University, USA. <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/real-time-single-channel-speech-enhancement-with-recurrent-neural-networks\/\">Real-time Single-channel Speech Enhancement with Recurrent Neural Networks<\/a>.<\/p>\n\n\n\n\n\n<p>Anderson Avila, Institut National de la Recherche Scientifique (INRS-EMT), Canada. <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/deep-neural-network-models-for-audio-quality-assessment\/\">Deep Neural Network Models for Audio Quality Assessment<\/a>.<\/p>\n\n\n\n<p>Andrea Genovese, New York University Steinhardt, USA. <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/blind-room-parameter-estimation-in-real-time-from-single-channel-audio-signals-in-noisy-conditions\/\">Blind Room Parameter Estimation in Real Time from Single-Channel Audio Signals in Noisy Conditions<\/a>.<\/p>\n\n\n\n<p>Benjamin Martinez Elizalde, Carnegie Mellon University, USA. <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/a-cross-modal-audio-search-engine-based-on-joint-audio-text-embeddings\/\">A Cross-modal Audio Search Engine based on Joint Audio-Text Embeddings<\/a>.<\/p>\n\n\n\n<p>Chen Song, University at Buffalo, the State University of New York, USA. <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/sensor-fusion-for-learning-based-motion-estimation-in-vr\/\">Sensor Fusion for Learning-based Motion Estimation in VR<\/a>.<\/p>\n\n\n\n<p>Christoph F. Hold, Technische Universit\u00e4t Berlin, Germany. 
<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/improvements-on-higher-order-ambisonics-reproduction-in-the-spherical-harmonics-domain-under-real-time-constraints\/\">Improvements on Higher Order Ambisonics Reproduction in the Spherical Harmonics Domain Under Real-time Constraints<\/a>.<\/p>\n\n\n\n<p>Harishchandra Dubey, University of Texas at Dallas, USA. MSR-Freesound: Advancing Audio Event Detection & Classification through Efficient Deep Learning Approaches.<\/p>\n\n\n\n<p>Sebastian Braun, Friedrich-Alexander University of Erlangen Nuremberg (FAU), Germany. Speech Enhancement Using Linear and Non-linear Spatial Filtering for Head-mounted Displays.<\/p>\n\n\n\n\n\n<p>Etienne Thuillier, Aalto University, Finland. Spatial Audio Feature Discovery Using a Neural Network Classifier.<\/p>\n\n\n\n<p>Xuesu Xiao, Texas A&M University, USA. <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/articulated-human-pose-tracking-inertial-sensors\/\" target=\"_blank\" rel=\"noreferrer noopener\">Articulated Human Pose Tracking with Inertial Sensors<\/a>.<\/p>\n\n\n\n<p>Srinivas Parthasarathy, University of Texas at Dallas, USA. <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/speech-emotion-recognition-convolutional-neural-networks\/\" target=\"_blank\" rel=\"noreferrer noopener\">Speech Emotion Recognition with Convolutional Neural Networks<\/a>.<\/p>\n\n\n\n<p>Han Zhao, Carnegie Mellon University, USA. <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/high-accuracy-neural-network-models-speech-enhancement\/\" target=\"_blank\" rel=\"noreferrer noopener\">High-Accuracy Neural-Network Models for Speech Enhancement<\/a>.<\/p>\n\n\n\n<p>Jong Hwan Ko, Georgia Institute of Technology, USA. Efficient Neural-Network Design for Real-Time Speech Enhancement.<\/p>\n\n\n\n<p>Rasool Fakoor, University of Texas at Arlington, USA. 
Speech Enhancement With and Without Gradient Descent.<\/p>\n\n\n\n<p>Yan-hui Tu, University of Science and Technology of China, P. R. China. Regression Based Speech Enhancement with Neural Networks.<\/p>\n\n\n\n\n\n<p>Amit Das, University of Illinois at Urbana-Champaign, USA. <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/ultrasound-based-gesture-recognition\/\" target=\"_blank\" rel=\"noreferrer noopener\">Ultrasound Based Gesture Recognition<\/a>.<\/p>\n\n\n\n<p>Vani Rajendran, University of Oxford, UK. Simple Effects that Enhance the Elevation Perception in Spatial Sound.<\/p>\n\n\n\n<p>Zhong-Qiu Wang, Ohio State University. Emotion, gender, and age recognition from speech utterances using neural networks.<\/p>\n\n\n\n\n\n<p>Archontis Politis, Aalto University, Finland. <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/applications-of-3-dimensional-spherical-transforms-to-acoustics-and-personalization-of-head-related-transfer-functions-hrtfs\/\" target=\"_blank\" rel=\"noreferrer noopener\">Applications of 3-Dimensional Spherical Transforms to Acoustics and Personalization of Head-related Transfer Functions (HRTFs)<\/a>.<\/p>\n\n\n\n<p>Supreeth Krishna Rao,&nbsp;Worcester Polytechnic Institute, USA. Ultrasound Doppler Radar.<\/p>\n\n\n\n<p>Seyedmahdad Mirsamadi, University of Texas at Dallas, USA. <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/dnn-based-online-speech-enhancement-using-multitask-learning-and-suppression-rule-estimation\/\" target=\"_blank\" rel=\"noreferrer noopener\">DNN-based Online Speech Enhancement Using Multitask Learning and Suppression Rule Estimation<\/a>.<\/p>\n\n\n\n<p>Long Le, University of Illinois at Urbana-Champaign, USA. Spatial Probability for Sound Source Localization.<\/p>\n\n\n\n\n\n<p>Jinkyu Lee, Yonsei University, Korea. Emotion Detection from Speech Signals.<\/p>\n\n\n\n<p>Felicia Lim,&nbsp;Imperial College London, UK. 
Blind Estimation of Reverberation Parameters.<\/p>\n\n\n\n\n\n<p>Ivan Dokmanic, EPFL, Switzerland. <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/ultrasound-depth-imaging\/\" target=\"_blank\" rel=\"noreferrer noopener\">Ultrasound Depth Imaging<\/a>.<\/p>\n\n\n\n<p>Piotr Bilinski, INRIA, France. <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/hrtf-personalization-using-anthropometric-features\/\" target=\"_blank\" rel=\"noreferrer noopener\">HRTF Personalization Using Anthropometric Features<\/a>.<\/p>\n\n\n\n<p>Kun Han, Ohio State University, USA. <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/emotion-detection-from-speech-signals\/\" target=\"_blank\" rel=\"noreferrer noopener\">Emotion Detection from Speech Signals<\/a>.<\/p>\n\n\n\n\n\n<p>Keith Godin, University of Texas at Dallas, USA. Open-set Speaker Identification on Noisy, Short Utterances.<\/p>\n\n\n\n<p>Jason Wung, Georgia Tech, USA. Next Steps in Multi-Channel Acoustic Echo reduction for Xbox Kinect.<\/p>\n\n\n\n<p>Xing Li, University of Washington, USA. Dynamic Loudness Control for In-Car Audio.<\/p>\n\n\n\n\n\n<p>Keith Godin, University of Texas at Dallas, USA. Binaural Sound Source Localization.<\/p>\n\n\n\n\n\n<p>Hoang Do, Brown University, USA. 
A Step Towards NUI: Speaker Verification for Gaming Scenarios.<\/p>\n\n\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<figure class=\"wp-block-image aligncenter size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"962\" height=\"536\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/10\/Thumbnail-DSC_9142-AARG-official-2.jpg\" alt=\"a group of people posing for a photo\" class=\"wp-image-1149571\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/10\/Thumbnail-DSC_9142-AARG-official-2.jpg 962w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/10\/Thumbnail-DSC_9142-AARG-official-2-300x167.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/10\/Thumbnail-DSC_9142-AARG-official-2-768x428.jpg 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/10\/Thumbnail-DSC_9142-AARG-official-2-240x134.jpg 240w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/10\/Thumbnail-DSC_9142-AARG-official-2-960x536.jpg 960w\" sizes=\"auto, (max-width: 962px) 100vw, 962px\" \/><\/figure>\n\n\n\n<div style=\"height:10px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"on-the-salmon-road\"><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/vimeo.com\/1108134563\/c322df5e51\">Acquiring natural sounds<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h4>\n\n\n\n<p>July 14, 2025<\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<figure class=\"wp-block-image 
aligncenter size-full\"><a href=\"https:\/\/vimeo.com\/1036839920\/473a9f3bdb\" target=\"_blank\" rel=\"noreferrer noopener\"><img loading=\"lazy\" decoding=\"async\" width=\"962\" height=\"539\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/10\/team-life-whales_v2.jpg\" alt=\"Ivan Tashev et al. posing for a photo next to a boat\" class=\"wp-image-1110765\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/10\/team-life-whales_v2.jpg 962w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/10\/team-life-whales_v2-300x168.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/10\/team-life-whales_v2-768x430.jpg 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/10\/team-life-whales_v2-655x368.jpg 655w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/10\/team-life-whales_v2-240x134.jpg 240w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/10\/team-life-whales_v2-640x360.jpg 640w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/10\/team-life-whales_v2-960x539.jpg 960w\" sizes=\"auto, (max-width: 962px) 100vw, 962px\" \/><\/a><\/figure>\n\n\n\n<div style=\"height:10px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"on-the-salmon-road\"><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/vimeo.com\/1036839920\/473a9f3bdb\" target=\"_blank\" rel=\"noopener noreferrer\">Listening to the Whales<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h4>\n\n\n\n<p>June 27, 2024<\/p>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<figure class=\"wp-block-image aligncenter 
size-full\"><a href=\"https:\/\/vimeo.com\/943362273\/00b4c8a65a\" target=\"_blank\" rel=\"noreferrer noopener\"><img loading=\"lazy\" decoding=\"async\" width=\"962\" height=\"539\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/05\/Taste-smell-complimentary-senses.jpg\" alt=\"Audio and Acoustics Research Group, &quot;Taste and smell: complimentary senses&quot; morale event on March 22, 2024\" class=\"wp-image-1038492\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/05\/Taste-smell-complimentary-senses.jpg 962w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/05\/Taste-smell-complimentary-senses-300x168.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/05\/Taste-smell-complimentary-senses-768x430.jpg 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/05\/Taste-smell-complimentary-senses-655x368.jpg 655w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/05\/Taste-smell-complimentary-senses-240x134.jpg 240w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/05\/Taste-smell-complimentary-senses-640x360.jpg 640w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/05\/Taste-smell-complimentary-senses-960x539.jpg 960w\" sizes=\"auto, (max-width: 962px) 100vw, 962px\" \/><\/a><\/figure>\n\n\n\n<div style=\"height:10px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"on-the-salmon-road\"><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/vimeo.com\/943362273\/00b4c8a65a\" target=\"_blank\" rel=\"noopener noreferrer\">Taste and smell: complimentary senses<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h4>\n\n\n\n<p>March 22, 2024<\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<figure 
class=\"wp-block-image aligncenter size-full\"><a href=\"https:\/\/vimeo.com\/860248716\/64b7fcf1b0\" target=\"_blank\" rel=\"noreferrer noopener\"><img loading=\"lazy\" decoding=\"async\" width=\"962\" height=\"539\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/2023-on-the-salmon-road.jpg\" alt=\"Audio and Acoustics Research Group, &quot;On the salmon road&quot; morale event on June 30, 2023\" class=\"wp-image-966333\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/2023-on-the-salmon-road.jpg 962w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/2023-on-the-salmon-road-300x168.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/2023-on-the-salmon-road-768x430.jpg 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/2023-on-the-salmon-road-655x368.jpg 655w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/2023-on-the-salmon-road-343x193.jpg 343w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/2023-on-the-salmon-road-240x134.jpg 240w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/2023-on-the-salmon-road-640x360.jpg 640w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/2023-on-the-salmon-road-960x539.jpg 960w\" sizes=\"auto, (max-width: 962px) 100vw, 962px\" \/><\/a><\/figure>\n\n\n\n<div style=\"height:10px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"on-the-salmon-road\"><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/vimeo.com\/860248716\/64b7fcf1b0\" target=\"_blank\" rel=\"noopener noreferrer\">On the salmon road<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h4>\n\n\n\n<p>June 30, 2023<\/p>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex 
wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<figure class=\"wp-block-image size-full\"><a href=\"https:\/\/vimeo.com\/855513483\/ec25ea04a6\" target=\"_blank\" rel=\"noreferrer noopener\"><img loading=\"lazy\" decoding=\"async\" width=\"962\" height=\"539\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/2023-Clicking-sounds.jpg\" alt=\"Audio and Acoustics Research Group, &quot;Clicking sounds and pulsating noise&quot; morale event on March 24, 2023\" class=\"wp-image-966330\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/2023-Clicking-sounds.jpg 962w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/2023-Clicking-sounds-300x168.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/2023-Clicking-sounds-768x430.jpg 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/2023-Clicking-sounds-655x368.jpg 655w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/2023-Clicking-sounds-343x193.jpg 343w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/2023-Clicking-sounds-240x134.jpg 240w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/2023-Clicking-sounds-640x360.jpg 640w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/2023-Clicking-sounds-960x539.jpg 960w\" sizes=\"auto, (max-width: 962px) 100vw, 962px\" \/><\/a><\/figure>\n\n\n\n<div style=\"height:10px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"clicking-sounds-and-pulsating-noise\"><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/vimeo.com\/855513483\/ec25ea04a6\" target=\"_blank\" rel=\"noopener noreferrer\">Clicking sounds 
and pulsating noise<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h4>\n\n\n\n<p>March 24, 2023<\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<figure class=\"wp-block-image size-full\"><a href=\"https:\/\/vimeo.com\/855473992\/ed2e8eb3d3\" target=\"_blank\" rel=\"noreferrer noopener\"><img loading=\"lazy\" decoding=\"async\" width=\"962\" height=\"539\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/2022-Horses-and-tulips.jpg\" alt=\"Audio and Acoustics Research Group, &quot;Horses and tulips&quot; morale event on April 1, 2022\" class=\"wp-image-966336\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/2022-Horses-and-tulips.jpg 962w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/2022-Horses-and-tulips-300x168.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/2022-Horses-and-tulips-768x430.jpg 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/2022-Horses-and-tulips-655x368.jpg 655w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/2022-Horses-and-tulips-343x193.jpg 343w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/2022-Horses-and-tulips-240x134.jpg 240w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/2022-Horses-and-tulips-640x360.jpg 640w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/2022-Horses-and-tulips-960x539.jpg 960w\" sizes=\"auto, (max-width: 962px) 100vw, 962px\" \/><\/a><\/figure>\n\n\n\n<div style=\"height:10px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"horses-and-tulips\"><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/vimeo.com\/855473992\/ed2e8eb3d3\" target=\"_blank\" rel=\"noopener 
Horses and tulips">
noreferrer\">Horses and tulips<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h4>\n\n\n\n<p>April 1, 2022<\/p>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<figure class=\"wp-block-image size-full\"><a href=\"https:\/\/vimeo.com\/382658377\/4b7e3bd87a\" target=\"_blank\" rel=\"noreferrer noopener\"><img loading=\"lazy\" decoding=\"async\" width=\"810\" height=\"454\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/Surface-hydroacoustics-2019.jpg\" alt=\"Audio and Acoustics Research Group, &quot;Surface hydroacoustics&quot; morale event on August 1, 2019\" class=\"wp-image-966378\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/Surface-hydroacoustics-2019.jpg 810w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/Surface-hydroacoustics-2019-300x168.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/Surface-hydroacoustics-2019-768x430.jpg 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/Surface-hydroacoustics-2019-655x368.jpg 655w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/Surface-hydroacoustics-2019-343x193.jpg 343w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/Surface-hydroacoustics-2019-240x135.jpg 240w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/Surface-hydroacoustics-2019-640x360.jpg 640w\" sizes=\"auto, (max-width: 810px) 100vw, 810px\" \/><\/a><\/figure>\n\n\n\n<div style=\"height:10px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"surface-hydroacoustics\"><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" 
href=\"https:\/\/vimeo.com\/382658377\/4b7e3bd87a\" target=\"_blank\" rel=\"noopener noreferrer\">Surface hydroacoustics<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h4>\n\n\n\n<p>\nAugust 1, 2019 <\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<figure class=\"wp-block-image size-full\"><a href=\"https:\/\/vimeo.com\/382653843\/abe54a2c74\" target=\"_blank\" rel=\"noreferrer noopener\"><img loading=\"lazy\" decoding=\"async\" width=\"827\" height=\"463\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/underwater-underground-sounds-2019.jpg\" alt=\"Audio and Acoustics Research Group, \"Underwater and Underground Sounds\" morale event on April 17, 2019\" class=\"wp-image-966396\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/underwater-underground-sounds-2019.jpg 827w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/underwater-underground-sounds-2019-300x168.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/underwater-underground-sounds-2019-768x430.jpg 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/underwater-underground-sounds-2019-655x368.jpg 655w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/underwater-underground-sounds-2019-343x193.jpg 343w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/underwater-underground-sounds-2019-240x134.jpg 240w\" sizes=\"auto, (max-width: 827px) 100vw, 827px\" \/><\/a><\/figure>\n\n\n\n<div style=\"height:10px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"underwater-and-underground-sounds\"><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/vimeo.com\/382653843\/abe54a2c74\" target=\"_blank\" rel=\"noopener 
Underwater and Underground">
noreferrer\">Underwater and Underground Sounds<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h4>\n\n\n\n<p>\nApril 17, 2019 <\/p>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<figure class=\"wp-block-image size-full is-style-default\"><a href=\"https:\/\/vimeo.com\/282789525\/5cd9c3c0e2\" target=\"_blank\" rel=\"noreferrer noopener\"><img loading=\"lazy\" decoding=\"async\" width=\"962\" height=\"539\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/12\/waves-propagation.jpg\" alt=\"Audio and Acoustics Research Group, &quot;Waves propagation&quot; morale event on June 15, 2018\" class=\"wp-image-557415\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/12\/waves-propagation.jpg 962w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/12\/waves-propagation-300x168.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/12\/waves-propagation-768x430.jpg 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/12\/waves-propagation-655x368.jpg 655w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/12\/waves-propagation-343x193.jpg 343w\" sizes=\"auto, (max-width: 962px) 100vw, 962px\" \/><\/a><\/figure>\n\n\n\n<div style=\"height:10px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"waves-propagation\"><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/vimeo.com\/282789525\/5cd9c3c0e2\" target=\"_blank\" rel=\"noopener noreferrer\">Waves propagation<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h4>\n\n\n\n<p>\nJune 15, 2018 <\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow 
">
wp-block-column-is-layout-flow\">\n<figure class=\"wp-block-image size-full\"><a href=\"https:\/\/vimeo.com\/277894197\/1a4bbc51aa\" target=\"_blank\" rel=\"noreferrer noopener\"><img loading=\"lazy\" decoding=\"async\" width=\"962\" height=\"539\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/12\/winter.jpg\" alt=\"Audio and Acoustics Research Group in the snow for &quot;Frozen sound&quot; morale event on February 26, 2018\" class=\"wp-image-555825\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/12\/winter.jpg 962w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/12\/winter-300x168.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/12\/winter-768x430.jpg 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/12\/winter-655x368.jpg 655w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/12\/winter-343x193.jpg 343w\" sizes=\"auto, (max-width: 962px) 100vw, 962px\" \/><\/a><\/figure>\n\n\n\n<div style=\"height:10px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"frozen-sound\"><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/vimeo.com\/277894197\/1a4bbc51aa\" target=\"_blank\" rel=\"noopener noreferrer\">Frozen sound<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h4>\n\n\n\n<p>\nFebruary 26, 2018 <\/p>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<figure class=\"wp-block-image size-full\"><a href=\"https:\/\/vimeo.com\/229149919\/a344785ddb\" target=\"_blank\" rel=\"noreferrer noopener\"><img loading=\"lazy\" decoding=\"async\" width=\"962\" height=\"539\" 
src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/10\/dust.jpg\" alt=\"Audio and Acoustics Research Group on ATVs for \"Sounds in the dust\" morale event on July 28, 2017\" class=\"wp-image-476079\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/10\/dust.jpg 962w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/10\/dust-300x168.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/10\/dust-768x430.jpg 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/10\/dust-655x368.jpg 655w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/10\/dust-343x193.jpg 343w\" sizes=\"auto, (max-width: 962px) 100vw, 962px\" \/><\/a><\/figure>\n\n\n\n<div style=\"height:10px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"sounds-in-the-dust\"><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/vimeo.com\/229149919\/a344785ddb\" target=\"_blank\" rel=\"noopener noreferrer\">Sounds in the Dust<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h4>\n\n\n\n<p>\nJuly 28, 2017 <\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<figure class=\"wp-block-image size-full\"><a href=\"https:\/\/vimeo.com\/244417164\/e5ef82d6bb\" target=\"_blank\" rel=\"noreferrer noopener\"><img loading=\"lazy\" decoding=\"async\" width=\"960\" height=\"539\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/10\/horses.jpg\" alt=\"Audio and Acoustics Research Group horseback riding on Orcas Island for morale event on August 1, 2016\" class=\"wp-image-476082\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/10\/horses.jpg 960w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/10\/horses-300x168.jpg 
300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/10\/horses-768x431.jpg 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/10\/horses-655x368.jpg 655w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/10\/horses-343x193.jpg 343w\" sizes=\"auto, (max-width: 960px) 100vw, 960px\" \/><\/a><\/figure>\n\n\n\n<div style=\"height:10px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"horseback-riding-on-orcas-island\"><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/vimeo.com\/244417164\/e5ef82d6bb\" target=\"_blank\" rel=\"noopener noreferrer\">Horseback Riding on Orcas Island<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h4>\n\n\n\n<p>\nAugust 1, 2016 <\/p>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<figure class=\"wp-block-image size-full\"><a href=\"https:\/\/vimeo.com\/244418783\/46aa99cb5b\" target=\"_blank\" rel=\"noreferrer noopener\"><img loading=\"lazy\" decoding=\"async\" width=\"961\" height=\"540\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/10\/rafting.jpg\" alt=\"Audio and Acoustics Research Group white water rafting summer trip in 2015\" class=\"wp-image-476085\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/10\/rafting.jpg 961w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/10\/rafting-300x169.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/10\/rafting-768x432.jpg 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/10\/rafting-655x368.jpg 655w, 
https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/10\/rafting-343x193.jpg 343w\" sizes=\"auto, (max-width: 961px) 100vw, 961px\" \/><\/a><\/figure>\n\n\n\n<div style=\"height:10px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"white-water-rafting-summer-trip\"><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/vimeo.com\/244418783\/46aa99cb5b\" target=\"_blank\" rel=\"noopener noreferrer\">White water rafting summer trip<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h4>\n\n\n\n<p>\nJune 24, 2015 <\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<figure class=\"wp-block-image size-full\"><a href=\"https:\/\/vimeo.com\/245562581\/8666235c1d\" target=\"_blank\" rel=\"noreferrer noopener\"><img loading=\"lazy\" decoding=\"async\" width=\"960\" height=\"541\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/10\/whale.jpg\" alt=\"Audio and Acoustics Research Group whale watching summer trip in 2014\" class=\"wp-image-476091\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/10\/whale.jpg 960w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/10\/whale-300x169.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/10\/whale-768x433.jpg 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/10\/whale-655x368.jpg 655w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/10\/whale-343x193.jpg 343w\" sizes=\"auto, (max-width: 960px) 100vw, 960px\" \/><\/a><\/figure>\n\n\n\n<div style=\"height:10px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"whale-watching-summer-trip\"><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" 
href=\"https:\/\/vimeo.com\/245562581\/8666235c1d\" target=\"_blank\" rel=\"noopener noreferrer\">Whale watching summer trip<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h4>\n\n\n\n<p>\nJuly 28, 2014 <\/p>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<figure class=\"wp-block-image size-full\"><a href=\"https:\/\/vimeo.com\/246714787\/4e21f04419\" target=\"_blank\" rel=\"noreferrer noopener\"><img loading=\"lazy\" decoding=\"async\" width=\"960\" height=\"539\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/10\/skiing.jpg\" alt=\"Audio and Acoustics Research Group skiing for morale event on March 13, 2014\" class=\"wp-image-476088\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/10\/skiing.jpg 960w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/10\/skiing-300x168.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/10\/skiing-768x431.jpg 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/10\/skiing-655x368.jpg 655w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/10\/skiing-343x193.jpg 343w\" sizes=\"auto, (max-width: 960px) 100vw, 960px\" \/><\/a><\/figure>\n\n\n\n<div style=\"height:10px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"audio-on-ski-or-skiing-audio\"><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/vimeo.com\/246714787\/4e21f04419\" target=\"_blank\" rel=\"noopener noreferrer\">Audio on ski \u2026 or skiing Audio<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h4>\n\n\n\n<p>\nMarch 13, 2014 <\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow 
wp-block-column-is-layout-flow\"><\/div>\n<\/div>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n","protected":false},"excerpt":{"rendered":"<p>The Audio and Acoustics group conducts research in audio processing and speech enhancement, 3D audio perception and technologies, devices for audio capture and rendering, array processing, information extraction from audio signals.<\/p>\n","protected":false},"featured_media":887814,"template":"","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","msr_group_start":"2011-04-05","footnotes":""},"research-area":[13556,243062,13552],"msr-group-type":[243694],"msr-locale":[268875],"msr-impact-theme":[],"class_list":["post-144923","msr-group","type-msr-group","status-publish","has-post-thumbnail","hentry","msr-research-area-artificial-intelligence","msr-research-area-audio-acoustics","msr-research-area-hardware-devices","msr-group-type-group","msr-locale-en_us"],"msr_group_start":"2011-04-05","msr_detailed_description":"","msr_further_details":"","msr_hero_images":[],"msr_research_lab":[199565,1161007],"related-researchers":[{"type":"user_nicename","display_name":"Sebastian Braun","user_id":37688,"people_section":"Team members","alias":"sebraun"},{"type":"user_nicename","display_name":"Dimitra Emmanouilidou","user_id":37461,"people_section":"Team members","alias":"diemmano"},{"type":"user_nicename","display_name":"Hannes Gamper","user_id":31943,"people_section":"Team members","alias":"hagamper"},{"type":"user_nicename","display_name":"David Johnston","user_id":31562,"people_section":"Team members","alias":"davidjo"},{"type":"user_nicename","display_name":"Ivan Tashev","user_id":32127,"people_section":"Team members","alias":"ivantash"},{"type":"user_nicename","display_name":"Ed Cutrell","user_id":31490,"people_section":"Collaborators and 
affiliates","alias":"cutrell"},{"type":"user_nicename","display_name":"Johannes Gehrke","user_id":32364,"people_section":"Collaborators and affiliates","alias":"johannes"},{"type":"user_nicename","display_name":"Eric Horvitz","user_id":32033,"people_section":"Collaborators and affiliates","alias":"horvitz"},{"type":"user_nicename","display_name":"Rico Malvar","user_id":32786,"people_section":"Collaborators and affiliates","alias":"malvar"},{"type":"user_nicename","display_name":"Nikunj Raghuvanshi","user_id":33106,"people_section":"Collaborators and affiliates","alias":"nikunjr"},{"type":"user_nicename","display_name":"Andy Wilson","user_id":31159,"people_section":"Collaborators and affiliates","alias":"awilson"}],"related-publications":[454383,486468,466413,480927,466377,466398,466422,449847,441264,466431,466449,369605,347000,372032,346982,371261,372026,371249,372020,274161,274497,274125,274458,245471,371972,245708,238131,238132,377081,244142,245462,377132,168888,244088,168755,244148,168450,168449,168299,168298,168297,168451,167783,167965,167055,167515,167542,167506,166853,166735,166682,166725,166727,168452,166734,168453,165801,165800,244130,165624,250184,164327,166120,164093,164059,164058,166126,244082,163085,162754,244061,245444,162755,274479,162161,161872,253745,245441,244094,244115,161420,245525,160244,159878,159389,245582,158121,245570,158122,253751,158123,155561,155560,155559,244118,155562,245399,156678,155563,167053,167052,167054,155568,155567,155566,155569,245609,155570,156679,167051,155571,155572,155573,155574,155575,155576,155577,155578,155951,155954,158120,166572,437388,437400,489146,507317,544869,573123,574440,574680,578248,581593,582358,582376,582475,611670,611697,612726,612741,616803,618039,642027,658839,658848,658857,661485,664644,687735,697225,697996,703480,754294,754306,754324,754333,758989,759025,763384,763438,764146,768106,768115,786796,787132,803650,810181,815314,816130,820753,830983,854880,863991,864003,864012,864021,885597,885624,885639,887826
,890886,915651,916755,924168,927873,951393,970326,970338,979095,982389,982416,982425,983700,1016412,1050846,1065057,1084854,1087872,1089270,1090497,1102137,1135498,1136786,1136799,1136805,1136811,1136940,1143710,1144144,1151553,1155862,1155866,1155868,1155873,1160906,1160911,1160913,1161156],"related-downloads":[],"related-videos":[185671,185697,185719,185905,186536,186630,189076,191147,192776,422931,422949,189804,189850,189952,192713,285926,424950,302630,474945,608088,983793,1156573,1152383,1080114,1033716,983946,983931,983913,668202,253673,504884,505685,544005,544665,544887,609111,665607,668193,692004,709252,742930,806653,814282,846103,969852,983769],"related-projects":[559086,558039,661380,364265,430830,212079,488189],"related-events":[],"related-opportunities":[1169668],"related-posts":[384,305930,549591,585187,681651,720673,978693,984711,1000779,1019925,1059168,1089033,1136909],"tab-content":[{"id":0,"name":"Interns","content":"<strong>2021\r\n<\/strong>Wei-Cheng Lin, University of Texas at Dallas, USA. Toxic Speech and Speech Emotions: Investigations of Audio-based Modeling Methodology and Intercorrelations.\r\nShoken Kaneko, University of Maryland, USA. <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/diablo-a-deep-individual\/\">DIABLo: a Deep Individual-Agnostic Binaural Localizer<\/a>.\r\nJustin Kilmarx, University of Texas at Austin, USA. <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/developing-a-brain-computer-interface-based-on-visual-imagery\/\">Developing a Brain-Computer Interface Based on Visual Imagery<\/a>.\r\nViet Anh Trinh, City University of New York (CUNY), USA. <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/unsupervised-speech-enhancement\/\">Unsupervised Speech Enhancement<\/a>.\r\nAbu-Zaher Faridee, University of Maryland, USA. Non-Intrusive Multi-Task Speech Quality Assessment.\r\n\r\n<strong>2020<\/strong>\r\nAli Aroudi, University of Oldenburg, Germany. 
Geometry-constrained Beamforming Network for end-to-end Farfield Sound Source Separation.\r\nKuan-Jung Chiang, University of California \u2013 San Diego, USA. <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/a-closed-loop-adaptive-brain-computer-interface-framework\/\">A Closed-loop Adaptive Brain-computer Interface framework<\/a>.\r\nMidia Yousefi, University of Texas, Dallas, USA. Audio-based Toxic Language Detection.\r\nShoken Kaneko, University of Maryland, College Park, USA. <a href=\"https:\/\/resnet.microsoft.com\/video\/45240\">Forest Sound Scene Simulation and Bird Localization with Distributed Microphone Arrays<\/a>.\r\nWenkang An, Carnegie Mellon University, USA. <a href=\"https:\/\/resnet.microsoft.com\/video\/45256\">Decoding Music Attention from \u201cEEG headphones\u201d: a User-friendly Auditory Brain-computer Interface<\/a>.\r\n\r\n<strong>2019<\/strong>\r\nArindam Jati, University of Southern California (USC), Los Angeles, USA. Supervised Deep Hashing for Efficient Audio Retrieval.\r\nBenjamin Martinez Elizalde, Carnegie Mellon University, USA. Sound event recognition for video-content analysis.\r\nFabian Brinkmann, Technical University of Berlin, Germany. <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/efficient-and-perceptually-plausible-3-d-sound-for-virtual-reality\/\">Efficient and Perceptually Plausible 3-D Sound for Virtual Reality<\/a>.\r\nHakim Si Mohammed, INRIA Rennes, France. Improving the Ergonomics and User-Friendliness of SSVEP-based BCIs in Virtual Reality.\r\nMd Tamzeed Islam, University of North Carolina at Chapel Hill, USA. Anthropometric Feature Estimation using Sensors on Headphone for HRTF Personalization.\r\nMorayo Ogunsina, Penn State Erie, USA. Hearing AI App for Sound-Based User Surrounding Awareness.\r\nNicholas Huang, Johns Hopkins University, USA. 
Decoding Auditory Attention Via the Auditory Steady-State Response for Use in A Brain-Computer Interface.\r\nSahar Hashemgeloogerdi, University of Rochester, USA. Integrating Beamforming and Multichannel Linear Prediction for Dereverberation and Denoising.\r\nWenkang An, Carnegie Mellon University, USA. <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/decoding-multisensory-attention-from-electroencephalography-for-use-in-a-brain-computer-interface\/\">Decoding Multisensory Attention from Electroencephalography for Use in a Brain-Computer Interface<\/a>.\r\nYangyang (Raymond) Xia, Carnegie Mellon University, USA. <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/real-time-single-channel-speech-enhancement-with-recurrent-neural-networks\/\">Real-time Single-channel Speech Enhancement with Recurrent Neural Networks<\/a>.\r\n\r\n<strong>2018<\/strong>\r\nAnderson Avila, Institut National de la Recherche Scientifique (INRS-EMT), Canada. <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/deep-neural-network-models-for-audio-quality-assessment\/\">Deep Neural Network Models for Audio Quality Assessment<\/a>.\r\nAndrea Genovese, New York University Steinhardt, USA. <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/blind-room-parameter-estimation-in-real-time-from-single-channel-audio-signals-in-noisy-conditions\/\">Blind Room Parameter Estimation in Real Time from Single-Channel Audio Signals in Noisy Conditions<\/a>.\r\nBenjamin Martinez Elizalde, Carnegie Mellon University, USA. <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/a-cross-modal-audio-search-engine-based-on-joint-audio-text-embeddings\/\">A Cross-modal Audio Search Engine based on Joint Audio-Text Embeddings<\/a>.\r\nChen Song, University at Buffalo, the State University of New York, USA. 
<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/sensor-fusion-for-learning-based-motion-estimation-in-vr\/\">Sensor Fusion for Learning-based Motion Estimation in VR<\/a>.\r\nChristoph F. Hold, Technische Universit\u00e4t Berlin, Germany. <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/improvements-on-higher-order-ambisonics-reproduction-in-the-spherical-harmonics-domain-under-real-time-constraints\/\">Improvements on Higher Order Ambisonics Reproduction in the Spherical Harmonics Domain Under Real-time Constraints<\/a>.\r\nHarishchandra Dubey, University of Texas at Dallas, USA. MSR-Freesound: Advancing Audio Event Detection &amp; Classification through Efficient Deep Learning Approaches.\r\nSebastian Braun, Friedrich-Alexander University of Erlangen Nuremberg (FAU), Germany. Speech Enhancement Using Linear and Non-linear Spatial Filtering for Head-mounted Displays.\r\n\r\n<b>2017\r\n<\/b>Etienne Thuillier, Aalto University, Finland. Spatial Audio Feature Discovery Using a Neural Network Classifier.\r\nXuesu Xiao, Texas A&amp;M University, USA. <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/articulated-human-pose-tracking-inertial-sensors\/\" target=\"_blank\" rel=\"noopener\">Articulated Human Pose Tracking with Inertial Sensors<\/a>.\r\nSrinivas Parthasarathy, University of Texas at Dallas, USA. <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/speech-emotion-recognition-convolutional-neural-networks\/\" target=\"_blank\" rel=\"noopener\">Speech Emotion Recognition with Convolutional Neural Networks<\/a>.\r\nHan Zhao, Carnegie Mellon University, USA. <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/high-accuracy-neural-network-models-speech-enhancement\/\" target=\"_blank\" rel=\"noopener\">High-Accuracy Neural-Network Models for Speech Enhancement<\/a>.\r\nJong Hwan Ko, Georgia Institute of Technology, USA. 
Efficient Neural-Network Design for Real-Time Speech Enhancement.\r\nRasool Fakoor, University of Texas at Arlington, USA. Speech Enhancement With and Without Gradient Descent.\r\nYan-hui Tu, University of Science and Technology of China, P. R. China. Regression Based Speech Enhancement with Neural Networks.\r\n\r\n<strong>2016<\/strong>\r\nAmit Das, University of Illinois at Urbana-Champaign, USA. <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/ultrasound-based-gesture-recognition\/\" target=\"_blank\" rel=\"noopener\">Ultrasound Based Gesture Recognition<\/a>.\r\nVani Rajendran, University of Oxford, UK. Simple Effects that Enhance the Elevation Perception in Spatial Sound.\r\nZhong-Qiu Wang, Ohio State University. Emotion, gender, and age recognition from speech utterances using neural networks.\r\n\r\n<strong>2015<\/strong>\r\nArchontis Politis, Aalto University, Finland. <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/applications-of-3-dimensional-spherical-transforms-to-acoustics-and-personalization-of-head-related-transfer-functions-hrtfs\/\" target=\"_blank\" rel=\"noopener\">Applications of 3-Dimensional Spherical Transforms to Acoustics and Personalization of Head-related Transfer Functions (HRTFs)<\/a>.\r\nSupreeth Krishna Rao,\u00a0Worcester Polytechnic Institute, USA. Ultrasound Doppler Radar.\r\nSeyedmahdad Mirsamadi, University of Texas at Dallas, USA. <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/dnn-based-online-speech-enhancement-using-multitask-learning-and-suppression-rule-estimation\/\" target=\"_blank\" rel=\"noopener\">DNN-based Online Speech Enhancement Using Multitask Learning and Suppression Rule Estimation<\/a>.\r\nLong Le, University of Illinois at Urbana-Champaign, USA. Spatial Probability for Sound Source Localization.\r\n\r\n<strong>2014<\/strong>\r\nJinkyu Lee, Yonsei University, Korea. Emotion Detection from Speech Signals.\r\nFelicia Lim,\u00a0Imperial College London, UK. 
Blind Estimation of Reverberation Parameters.\r\n\r\n<strong>2013<\/strong>\r\nIvan Dokmanic, EPFL, Switzerland. <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/ultrasound-depth-imaging\/\" target=\"_blank\" rel=\"noopener\">Ultrasound Depth Imaging<\/a>.\r\nPiotr Bilinski, INRIA, France. <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/hrtf-personalization-using-anthropometric-features\/\" target=\"_blank\" rel=\"noopener\">HRTF Personalization Using Anthropometric Features<\/a>.\r\nKun Han, Ohio State University, USA. <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/emotion-detection-from-speech-signals\/\" target=\"_blank\" rel=\"noopener\">Emotion Detection from Speech Signals<\/a>.\r\n\r\n<strong>2012<\/strong>\r\nKeith Godin, University of Texas at Dallas, USA. Open-set Speaker Identification on Noisy, Short Utterances.\r\nJason Wung, Georgia Tech, USA. Next Steps in Multi-Channel Acoustic Echo reduction for Xbox Kinect.\r\nXing Li, University of Washington, USA. Dynamic Loudness Control for In-Car Audio.\r\n\r\n<strong>2011<\/strong>\r\nKeith Godin, University of Texas at Dallas, USA. Binaural Sound Source Localization.\r\n\r\n<strong>2010<\/strong>\r\nHoang Do, Brown University, USA. A Step Towards NUI: Speaker Verification for Gaming Scenarios."},{"id":1,"name":"In the News","content":"<div><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/blog\/hearing-in-3d-with-dr-ivan-tashev\/\"> Hearing in 3D with Dr. 
Ivan Tashev <\/a><\/div>\r\n<div>Microsoft Research Podcast, November 14, 2018<\/div>\r\n<div><a href=\"https:\/\/beyondtomorrow.dk\/\" target=\"_blank\" rel=\"noopener\"> Beyond Tomorrow \u2013 A vision study by Br\u00fcel &amp; Kj\u00e6r<\/a><\/div>\r\n<div>Beyond Tomorrow, December 4, 2017<\/div>\r\n<div><em>Ivan Tashev is on the expert panel<\/em><\/div>\r\n<div><a href=\"https:\/\/vrscout.com\/news\/sound-secret-sauce-immersive-experiences\/\" target=\"_blank\" rel=\"noopener\"> Is Sound the Secret Sauce for Making Immersive Experiences? <\/a><\/div>\r\n<div>VRScout, October 24, 2017<\/div>\r\n<div><a href=\"http:\/\/www.sciencetimes.com\/articles\/17340\/20170628\/listeners-seeing-what-they-hear-virtual-reality-3d-acoustics-integration.htm\" target=\"_blank\" rel=\"noopener\"> Listeners Seeing What They Hear: Virtual Reality &amp; 3D Acoustics Integration<\/a><\/div>\r\n<div>The Science Times, June 28, 2017<\/div>\r\n<div><a href=\"https:\/\/www.thetimes.co.uk\/article\/3d-sound-to-let-you-hear-walking-dead-zombies-first-sbx9zplt7\" target=\"_blank\" rel=\"noopener\"> 3D sound to let you hear Walking Dead zombies first<\/a><\/div>\r\n<div>The Times, June 27, 2017<\/div>\r\n<div><a href=\"https:\/\/phys.org\/news\/2017-06-functions-personalize-audio-virtual-reality.html\" target=\"_blank\" rel=\"noopener\"> Researchers use head related transfer functions to personalize audio in mixed and virtual reality<\/a><\/div>\r\n<div>Phys.org, June 26, 2017<\/div>\r\n<div><a href=\"https:\/\/www.sciencedaily.com\/releases\/2017\/06\/170626105826.htm\" target=\"_blank\" rel=\"noopener\"> Creating a personalized, immersive audio environment<\/a><\/div>\r\n<div>ScienceDaily, June 26, 2017<\/div>\r\n<div><a href=\"https:\/\/vimeo.com\/246716129\/13fefa8a4d\" target=\"_blank\" rel=\"noopener\"> Be There: 3D Audio Virtual Presence (video)<\/a><\/div>\r\n<div>TechFest, March 25, 2015<\/div>\r\n<div><a 
href=\"https:\/\/www.theguardian.com\/artanddesign\/architecture-design-blog\/2014\/nov\/07\/microsoft-headset-blind-3d-gps-guide-dogs\" target=\"_blank\" rel=\"noopener\"> Headset provides '3D soundscape' to help blind people navigate cities<\/a><\/div>\r\n<div>The Guardian, November 7, 2014<\/div>\r\n<div><a href=\"https:\/\/www.telegraph.co.uk\/technology\/news\/11210926\/How-3D-audio-technology-could-unlock-cities-for-blind-people.html\" target=\"_blank\" rel=\"noopener\"> How 3D audio technology could 'unlock' cities for blind people<\/a><\/div>\r\n<div>The Telegraph, November 6, 2014<\/div>\r\n<div><a href=\"https:\/\/youtu.be\/OOgrSvnmhrY\" target=\"_blank\" rel=\"noopener\"> Cities Unlocked: Lighting up the world through sound (video) <\/a><\/div>\r\n<div>Microsoft UK, November 6, 2014<\/div>\r\n<div><a href=\"https:\/\/www.engadget.com\/2016\/11\/02\/microsoft-exclusive-hololens-spatial-sound\/\" target=\"_blank\" rel=\"noopener\"> 3D audio is the secret to HoloLens' convincing holograms<\/a><\/div>\r\n<div>Engadget, November 2, 2016<\/div>\r\n<div><a href=\"https:\/\/singularityhub.com\/2014\/09\/28\/virtual-reality-may-become-the-next-great-media-platform-but-can-it-fool-all-five-senses\/#sm.0001is4stg15p1d4fv5opxn9s5nva\" target=\"_blank\" rel=\"noopener\"> Virtual Reality May Become the Next Great Media Platform\u2014But Can It Fool All Five Senses?<\/a><\/div>\r\n<div>Singularity Hub, September 28, 2014<\/div>\r\n<div><a href=\"https:\/\/singularityhub.com\/2014\/07\/06\/virtual-reality-needs-an-immersive-3d-soundscape\/#sm.0001is4stg15p1d4fv5opxn9s5nva\" target=\"_blank\" rel=\"noopener\"> What\u2019s Missing from Virtual Reality? 
Immersive 3D Soundscapes<\/a><\/div>\r\n<div>Singularity Hub, July 6, 2014<\/div>\r\n<div><a href=\"https:\/\/www.technologyreview.com\/s\/527826\/microsofts-3-d-audio-gives-virtual-objects-a-voice\/\" target=\"_blank\" rel=\"noopener\"> Microsoft\u2019s \u201c3-D Audio\u201d Gives Virtual Objects a Voice<\/a><\/div>\r\n<div>MIT Technology Review, June 4, 2014<\/div>\r\n<div><a href=\"https:\/\/www.windowscentral.com\/microsoft-3d-audio-tech-makes-virtual-sounds-sound-real\" target=\"_blank\" rel=\"noopener\"> Microsoft 3D audio tech makes virtual sounds sound real<\/a><\/div>\r\n<div>Windows Central, June 4, 2014<\/div>\r\n<div><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/blog\/audio-advances-help-xbox-one-determine-signal-from-noise\/\" target=\"_blank\" rel=\"noopener\"> Audio Advances Help Xbox One Determine Signal from Noise<\/a><\/div>\r\n<div>Microsoft Research Blog, October 16, 2013<\/div>\r\n<div><a href=\"https:\/\/channel9.msdn.com\/Series\/Microsoft-Research-Luminaries\/Ivan-Tashev-Helps-Makes-Microsoft-Sound-Great\" target=\"_blank\" rel=\"noopener\">Ivan Tashev Helps Make Microsoft Sound Great (video)<\/a><\/div>\r\n<div>Microsoft Research Luminaries, October 16, 2013<\/div>\r\n<div><a href=\"https:\/\/www.youtube.com\/watch?v=_y2xpXiArCY\" target=\"_blank\" rel=\"noopener\"> Keynote - Ivan Tashev Optimizing Kinect: Audio and Acoustics (video)<\/a><\/div>\r\n<div>ITA 2012, February 14, 2012<\/div>\r\n<div><a href=\"https:\/\/blogs.microsoft.com\/ai\/tellme-and-the-voice-of-kinect\/\" target=\"_blank\" rel=\"noopener\"> Tellme and the Voice of Kinect<\/a><\/div>\r\n<div>Microsoft - The AI Blog, August 1, 2011<\/div>\r\n<div><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/blog\/kinect-audio-preparedness-pays-off\/\" target=\"_blank\" rel=\"noopener\"> Kinect Audio: Preparedness Pays Off<\/a><\/div>\r\n<div>Microsoft Research Blog, April 14, 2011<\/div>\r\n<div><a 
href=\"https:\/\/channel9.msdn.com\/Events\/Ch9Live\/MIX11\/C9L208\" target=\"_blank\" rel=\"noopener\"> MSR NUI Panel with Curtis Wong &amp; Ivan\u00a0Tashev (video)<\/a><\/div>\r\n<div>Channel 9 Live at MIX11, April 13,\u00a02011<\/div>\r\n<div><a href=\"https:\/\/channel9.msdn.com\/Events\/MIX\/MIX11\/RES01\" target=\"_blank\" rel=\"noopener\"> Audio for Kinect: From Idea to \"Xbox,\u00a0Play!\" (video)<\/a><\/div>\r\n<div>MIX11, March 15,\u00a02011<\/div>"},{"id":2,"name":"Team Life","content":"[row][column class=\"m-col-12-24\"] <img class=\"wp-image-632877 alignnone\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/10\/2019-Surface-hydroacoustics-300x182.jpg\" alt=\"\" width=\"573\" height=\"348\" \/>\r\n<h4><a href=\"https:\/\/vimeo.com\/382658377\/4b7e3bd87a\">Surface hydroacoustics<\/a><\/h4>\r\nAugust 1, 2019 [\/column] [column class=\"m-col-12-24\"] <img class=\"alignnone wp-image-632880\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/10\/2019-Underwater-and-Underground-Sounds-300x174.jpg\" alt=\"\" width=\"600\" height=\"348\" \/>\r\n<h4><a href=\"https:\/\/vimeo.com\/382653843\/abe54a2c74\">Underwater and Underground Sounds<\/a><\/h4>\r\nApril 17, 2019 [\/column][\/row] [row][column class=\"m-col-12-24\"] <a href=\"https:\/\/vimeo.com\/282789525\/5cd9c3c0e2\" target=\"_blank\" rel=\"noopener\"><img class=\"alignnone wp-image-557415\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/12\/waves-propagation.jpg\" alt=\"Waves propagation\" width=\"600\" height=\"336\" \/><\/a>\r\n<h4><a href=\"https:\/\/vimeo.com\/282789525\/5cd9c3c0e2\" target=\"_blank\" rel=\"noopener\">Waves propagation<\/a><\/h4>\r\nJune 15, 2018 [\/column] [column class=\"m-col-12-24\"] <a href=\"https:\/\/vimeo.com\/277894197\/1a4bbc51aa\" target=\"_blank\" rel=\"noopener\"><img class=\"alignnone wp-image-555825\" 
src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/12\/winter.jpg\" alt=\"Frozen sound\" width=\"600\" height=\"337\" \/><\/a>\r\n<h4><a href=\"https:\/\/vimeo.com\/277894197\/1a4bbc51aa\" target=\"_blank\" rel=\"noopener\">Frozen sound<\/a><\/h4>\r\nFebruary 26, 2018 [\/column][\/row] [row][column class=\"m-col-12-24\"] <a href=\"https:\/\/vimeo.com\/229149919\/a344785ddb\" target=\"_blank\" rel=\"noopener\"><img class=\"alignnone wp-image-476079\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/10\/dust.jpg\" alt=\"Sounds in the Dust\" width=\"600\" height=\"336\" \/><\/a>\r\n<h4><a href=\"https:\/\/vimeo.com\/229149919\/a344785ddb\" target=\"_blank\" rel=\"noopener\">Sounds in the Dust<\/a><\/h4>\r\nJuly 28, 2017 [\/column] [column class=\"m-col-12-24\"] <a href=\"https:\/\/vimeo.com\/244417164\/e5ef82d6bb\" target=\"_blank\" rel=\"noopener\"><img class=\"alignnone wp-image-476082\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/10\/horses.jpg\" alt=\"\" width=\"600\" height=\"337\" \/><\/a>\r\n<h4><a href=\"https:\/\/vimeo.com\/244417164\/e5ef82d6bb\" target=\"_blank\" rel=\"noopener\">Horseback Riding on Orcas Island<\/a><\/h4>\r\nAugust 1, 2016 [\/column][\/row] [row][column class=\"m-col-12-24\"] <a href=\"https:\/\/vimeo.com\/244418783\/46aa99cb5b\" target=\"_blank\" rel=\"noopener\"><img class=\"alignnone wp-image-476085\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/10\/rafting.jpg\" alt=\"\" width=\"600\" height=\"337\" \/><\/a>\r\n<h4><a href=\"https:\/\/vimeo.com\/244418783\/46aa99cb5b\" target=\"_blank\" rel=\"noopener\">White water rafting summer trip<\/a><\/h4>\r\nJune 24, 2015 [\/column] [column class=\"m-col-12-24\"] <a href=\"https:\/\/vimeo.com\/245562581\/8666235c1d\" target=\"_blank\" rel=\"noopener\"><img class=\"wp-image-476091 alignnone\" 
src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/10\/whale.jpg\" alt=\"\" width=\"600\" height=\"338\" \/><\/a>\r\n<h4><a href=\"https:\/\/vimeo.com\/245562581\/8666235c1d\" target=\"_blank\" rel=\"noopener\">Whale watching summer trip<\/a><\/h4>\r\nJuly 28, 2014 [\/column][\/row] [row][column class=\"m-col-12-24\"] <a href=\"https:\/\/vimeo.com\/246714787\/4e21f04419\" target=\"_blank\" rel=\"noopener\"><img class=\"alignnone wp-image-476088\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/10\/skiing.jpg\" alt=\"\" width=\"600\" height=\"337\" \/><\/a>\r\n<h4><a href=\"https:\/\/vimeo.com\/246714787\/4e21f04419\" target=\"_blank\" rel=\"noopener\">Audio on ski \u2026 or skiing Audio<\/a><\/h4>\r\nMarch 13, 2014 [\/column][\/row]"}],"msr_impact_theme":[]}