{"id":716434,"date":"2021-01-12T09:43:47","date_gmt":"2021-01-12T17:43:47","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-academic-program&#038;p=716434"},"modified":"2025-04-22T05:23:12","modified_gmt":"2025-04-22T12:23:12","slug":"acoustic-echo-cancellation-challenge-interspeech-2021","status":"publish","type":"msr-academic-program","link":"https:\/\/www.microsoft.com\/en-us\/research\/academic-program\/acoustic-echo-cancellation-challenge-interspeech-2021\/","title":{"rendered":"Acoustic Echo Cancellation Challenge \u2013 INTERSPEECH 2021"},"content":{"rendered":"\n\n<p><\/p>\n\n\n\n\n\n\n<p><strong>Program dates:<\/strong> January 2021 &#8211; June 2021<\/p>\n<p>The\u00a0<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.interspeech2021.org\/\" target=\"_blank\" rel=\"noopener noreferrer\">INTERSPEECH 2021<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>\u00a0Acoustic Echo Cancellation Challenge is intended to stimulate research in the area of acoustic echo cancellation (AEC), which is an important part of speech enhancement and still a top issue in audio communication and conferencing systems. Many recent AEC studies report good performance on synthetic datasets where the training and testing data come from the same underlying distribution. However, the AEC performance often degrades significantly on real recordings. Also, most of the conventional objective metrics such as echo return loss enhancement (ERLE) and perceptual evaluation of speech quality (PESQ) do not correlate well with subjective speech quality tests in the presence of background noise and reverberation found in realistic environments.<\/p>\n<p>In this challenge, we open source two large datasets to train AEC models under both single talk and double talk scenarios. These datasets consist of recordings from more than 5,000 real audio devices and human speakers in real environments, as well as a synthetic dataset. We also open source an online subjective test framework and provide an online objective metric service for researchers to quickly test their results. The winners of this challenge will be selected based on the average Mean Opinion Score (MOS) achieved across all different single talk and double talk scenarios.<\/p>\n<p>&nbsp;<\/p>\n<h3>Submission instructions<\/h3>\n<p>Please use\u00a0<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/cmt3.research.microsoft.com\/AECCINTERSPEECH2021\" target=\"_blank\" rel=\"noopener noreferrer\">Microsoft Conference Management Toolkit\u00a0<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>for submitting the results. 
After logging in, complete the following steps to submit the results:

1. Choose "Create new submission" in the Author Console.
2. Enter the title, abstract, and co-authors, and upload a *lastname*.txt file (it can be empty or contain additional information regarding the submission).
3. Compress the enhanced result files into a single *lastname*.zip file, retaining the same folder and file names as the blind test set (max file size: 350 MB).
4. After creating the submission, return to the "Author Console" (by clicking "Submissions" at the top of the page) and upload the *lastname*.zip file via "Upload Supplementary Material".

**Submission deadline:** March 15, 2021, 11:59pm (anywhere on Earth)

**Contact us:** For questions, please contact aec_challenge@microsoft.com
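Step 3 is easy to get wrong if absolute paths leak into the archive. Below is a minimal Python sketch of one way to package the enhanced clips; the `enhanced` directory name and the `.wav` extension are assumptions about your local layout, not part of the official instructions. The key point is that the archive must mirror the blind test set's folder and file names.

```python
# Minimal sketch of submission step 3 (illustrative, not official tooling):
# pack enhanced clips into lastname.zip, preserving the blind test set's
# folder and file names by storing paths relative to the results root.
import zipfile
from pathlib import Path

def pack_submission(results_dir: str, lastname: str) -> None:
    root = Path(results_dir)
    with zipfile.ZipFile(f"{lastname}.zip", "w", zipfile.ZIP_DEFLATED) as zf:
        for wav in sorted(root.rglob("*.wav")):
            # Relative arcname keeps the same folder/file names as the blind test set.
            zf.write(wav, arcname=wav.relative_to(root))

# Assumed layout: your model wrote its output under ./enhanced/
pack_submission("enhanced", "lastname")
```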
## Official Rules

### SPONSOR

These Official Rules ("Rules") govern the operation of the Microsoft INTERSPEECH 2021 AEC (see overview) Event Contest ("Contest"). Microsoft Corporation, One Microsoft Way, Redmond, WA, 98052, USA, is the Contest sponsor ("Sponsor").

### DEFINITIONS

In these Rules, "Microsoft", "we", "our", and "us" refer to Sponsor, and "you" and "yourself" refer to a Contest participant, or the parent/legal guardian of any Contest entrant who has not reached the age of majority to contractually obligate themselves in their legal place of residence. "Event" refers to the INTERSPEECH 2021 AEC (see overview) event held in Toronto, Canada (the "Event"). By entering, you (or your parent/legal guardian if you are not the age of majority in your legal place of residence) agree to be bound by these Rules.

### ENTRY PERIOD

The Contest will operate from January 8, 2021 to March 22, 2021 ("Entry Period"). The Entry Period is divided into several periods as described in the section How to Enter.

### ELIGIBILITY

Open to any registered Event attendee 18 years of age or older. If you are 18 years of age or older but have not reached the age of majority in your legal place of residence, then you must have the consent of a parent/legal guardian. Employees and directors of Microsoft Corporation and its subsidiaries, affiliates, advertising agencies, and Contest Parties are not eligible, nor are persons involved in the execution or administration of this promotion, or the family members of each of the above (parents, children, siblings, spouse/domestic partners, or individuals residing in the same household). Void in Cuba, Iran, North Korea, Sudan, Syria, the Region of Crimea, and where prohibited. For business/tradeshow events: if you are attending the Event in your capacity as an employee, it is your sole responsibility to comply with your employer's gift policies. Microsoft will not be party to any disputes or actions related to this matter.

### HOW TO ENTER

The Contest Objective is to promote collaborative research in real-time single-channel speech enhancement aimed at maximizing the subjective (perceptual) quality of the enhanced speech. Winners will be determined based on the speech quality of AEC models using the online subjective evaluation framework ITU-T P.831. Only models described in accepted INTERSPEECH 2021 papers will be eligible to win the Contest. See (yet to upload the paper) for additional Contest details.

You may participate as an individual or as a team. If forming a team, you must designate a "Team Captain" who will submit all entry materials on behalf of the team. Once you register as part of a team, you cannot change teams or alter your current team (either by adding or removing members) after the submission of your Entry. Limit one Entry per person and per team. You may not compete on multiple teams, and you may not enter both individually and on a team. We are not responsible for Entries that we do not receive for any reason, or for Entries that we receive but that are not decipherable or not functional for any reason. Each team is solely responsible for its own cooperation and teamwork. In no event will Microsoft officiate in any dispute regarding the conduct or cooperation of any team or its members.

The Contest will operate as follows:

**Registration / Development Period: January 8 – March 8, 2021.** To register, please send an email to aec_challenge@microsoft.com stating that you are interested in participating in the challenge. Please include the following details in your email:

- Names of the participants and name of the team captain
- Institution/Company
- Email

Then, (i) develop a speech enhancement model that best meets the Contest Objective as described more fully at (yet to be uploaded), and (ii) submit a paper to INTERSPEECH 2021 that reports the computational complexity of the model in terms of the number of parameters and the time it takes to infer a frame on a particular CPU (preferably an Intel Core i5 quad-core machine clocked at 2.4 GHz). To develop your model, use any publicly available dataset for training data, including the Contest datasets provided for training and developing models. You may augment your datasets with the Contest dataset, and you can augment your data in any way that improves the performance of your model. The final evaluation will be conducted on a blind test set that is similar to the open-sourced test set.

**Testing / Entry Period: January 8 – March 22, 2021.** On March 8, the blind test dataset will be made available. You will have until 11:59 PM PT on March 15 to test your model against this dataset and create a set of enhanced clips to submit for judging (your "Entry"). The rules of the challenge are as follows (a latency-check sketch follows this list):

- For the real-time track, the AEC must take less than the stride time *T_s* (in ms) to process a frame of size *T* (in ms) on an Intel Core i5 quad-core machine clocked at 2.4 GHz or an equivalent processor. For example, *T_s = T/2* for 50% overlap between frames. The total algorithmic latency allowed, including the frame size *T*, the stride time *T_s*, and any look-ahead, must be ≤ 40 ms. For example, for a real-time system that receives 20 ms audio chunks, if you use a frame length of 20 ms with a stride of 10 ms, resulting in an algorithmic delay of 30 ms, then you satisfy the latency requirement. If you use a frame size of 32 ms with a stride of 16 ms, resulting in an algorithmic delay of 48 ms, then your method does not satisfy the latency requirement, as the total algorithmic latency exceeds 40 ms. If your frame size plus stride, *T_1 = T + T_s*, is less than 40 ms, then you can use up to *(40 − T_1)* ms of future information.
- For the non-real-time track, there are no constraints on computation time. To infer the current frame *i* (in ms), the algorithm can access any number of past frames but only 40 ms of future frames (*i* + 40 ms).
- The AEC can be a deep model, a traditional signal processing algorithm, or a mix of the two. There are no restrictions on the AEC aside from the run time and algorithmic delay described above.
- Submissions must follow the instructions in the overview.
- Winners will be picked based on the subjective echo MOS evaluated on the blind test set using the ITU-T P.808 framework.
- The blind test set will be made available to the participants on March 15, 2021. Participants must send the results (audio clips) achieved by their developed models to the organizers. We will use the submitted clips to conduct an ITU-T P.808 subjective evaluation and pick the winners based on the results. Participants are forbidden from using the blind test set to retrain or tune their models. They should not submit results using other AEC methods that they are not submitting to INTERSPEECH 2021. Failing to adhere to these rules will lead to disqualification from the challenge.
- Participants should report the computational complexity of their model in terms of the number of parameters and the time it takes to infer a frame on a particular CPU (preferably an Intel Core i5 quad-core machine clocked at 2.4 GHz).
- Each participating team must submit an INTERSPEECH paper that summarizes the research effort and provides all the details needed to ensure reproducibility. Authors may choose to report additional objective/subjective metrics in their paper.
- Submitted papers will undergo the standard peer-review process of INTERSPEECH 2021. The paper must be accepted to the conference for the participants to be eligible for the challenge.
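The latency rule above is simple enough to sanity-check mechanically: total algorithmic latency is frame size *T* plus stride *T_s* plus any look-ahead, and must not exceed 40 ms. The sketch below encodes that arithmetic in Python; the function names are illustrative and not part of the challenge tooling.

```python
# Minimal sketch (illustrative, not challenge tooling) of the real-time
# track latency rule: T + T_s + look-ahead must not exceed 40 ms.
def total_algorithmic_latency_ms(frame_ms: float, stride_ms: float,
                                 lookahead_ms: float = 0.0) -> float:
    return frame_ms + stride_ms + lookahead_ms

def satisfies_latency_rule(frame_ms: float, stride_ms: float,
                           lookahead_ms: float = 0.0) -> bool:
    return total_algorithmic_latency_ms(frame_ms, stride_ms, lookahead_ms) <= 40.0

# The two worked examples from the rules:
assert satisfies_latency_rule(20, 10)       # 20 + 10 = 30 ms -> OK
assert not satisfies_latency_rule(32, 16)   # 32 + 16 = 48 ms -> too high

# If T_1 = T + T_s < 40 ms, up to (40 - T_1) ms of look-ahead is allowed.
def max_lookahead_ms(frame_ms: float, stride_ms: float) -> float:
    return max(0.0, 40.0 - (frame_ms + stride_ms))
```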
**INTERSPEECH 2021 Paper Submission and Judging Period: March 22, 2021 – 11:59 PM PT, June 2, 2021.** Your Entry must be described in a paper accepted by INTERSPEECH 2021. To submit a paper, visit https://www.interspeech2021.org/. The entry limit is one per person during the Entry Period. Any attempt by you to obtain more than the stated number of entries by using multiple/different accounts, identities, registrations, logins, or any other methods will void your entries, and you may be disqualified. Use of any automated system to participate is prohibited. We are not responsible for excess, lost, late, or incomplete entries. If disputed, entries will be deemed submitted by the "authorized account holder" of the email address, social media account, or other method used to enter. The "authorized account holder" is the natural person assigned to an email address by an internet or online service provider, or other organization responsible for assigning email addresses.

### ELIGIBLE ENTRY

To be eligible, an entry must meet the following content/technical requirements:

- Your Entry must be the method described in a paper accepted by INTERSPEECH 2021.
- Your entry must be your own original work; and
- Your entry cannot have been selected as a winner in any other contest; and
- You must have obtained any and all consents, approvals, or licenses required for you to submit your entry; and
- To the extent that entry requires the submission of user-generated content such as software, photos, videos, music, artwork, essays, etc., entrants warrant that their entry is their original work, has not been copied from others without permission or apparent rights, and does not violate the privacy, intellectual property rights, or other rights of any other person or entity. You may include Microsoft trademarks, logos, and designs, for which Microsoft grants you a limited license to use for the sole purposes of submitting an entry into this Contest; and
- Your entry may NOT contain, as determined by us in our sole and absolute discretion, any content that is obscene or offensive, violent, defamatory, disparaging or illegal, or that promotes alcohol, illegal drugs, tobacco or a particular political agenda, or that communicates messages that may reflect negatively on the goodwill of Microsoft.
- Your entry must NOT include enhanced clips produced with other AEC methods that you are not submitting to INTERSPEECH 2021.

### USE OF ENTRIES

We are not claiming ownership rights to your Submission. However, by submitting an entry, you grant us an irrevocable, royalty-free, worldwide right and license to use, review, assess, test and otherwise analyze your entry and all its content in connection with this Contest, and to use your entry in any media whatsoever now known or later invented for any non-commercial or commercial purpose, including, but not limited to, the marketing, sale or promotion of Microsoft products or services, without further permission from you. You will not receive any compensation or credit for use of your entry, other than what is described in these Official Rules. By entering you acknowledge that we may have developed or commissioned materials similar or identical to your entry, and you waive any claims resulting from any similarities to your entry. Further, you understand that we will not restrict work assignments of representatives who have had access to your entry, and you agree that use of information in our representatives' unaided memories in the development or deployment of our products or services does not create liability for us under this agreement or copyright or trade secret law. Your entry may be posted on a public website. We are not responsible for any unauthorized use of your entry by visitors to this website. We are not obligated to use your entry for any purpose, even if it has been selected as a winning entry.

### WINNER SELECTION AND NOTIFICATION

Pending confirmation of eligibility, potential winners will be selected by Microsoft or their Agent or a qualified judging panel from among all eligible entries received, based on the following judging criteria: 99% – the subjective speech quality evaluated on the blind test set using the ITU-T P.808 framework (we will use the submitted clips, with no alteration, to conduct the ITU-T P.808 subjective evaluation and pick the winners based on the results; see the overview for additional Contest details); 1% – the Entry was described in an accepted INTERSPEECH 2021 paper. Winners will be selected and notified within 7 days following the Event. In the event of a tie between any eligible entries, an additional judge will break the tie based on the judging criteria described above. The decisions of the judges are final and binding. If a public vote determines winners, it is prohibited for any person to obtain votes by any fraudulent or inappropriate means, including offering prizes or other inducements in exchange for votes, automated programs, or fraudulent IDs. Microsoft will void any questionable votes.

### ODDS

The odds of winning are based on the number and quality of eligible entries received.

### GENERAL CONDITIONS AND RELEASE OF LIABILITY

To the extent allowed by law, by entering you agree to release and hold harmless Microsoft and its respective parents, partners, subsidiaries, affiliates, employees, and agents from any and all liability for any injury, loss, or damage of any kind arising in connection with this Contest. All local laws apply. The decisions of Microsoft are final and binding. We reserve the right to cancel, change, or suspend this Contest for any reason, including cheating, technology failure, catastrophe, war, or any other unforeseen or unexpected event that affects the integrity of this Contest, whether human or mechanical. If the integrity of the Contest cannot be restored, we may select winners from among all eligible entries received before we had to cancel, change or suspend the Contest. If you attempt, or we have strong reason to believe that you have attempted, to compromise the integrity or the legitimate operation of this Contest by cheating, hacking, creating a bot or other automated program, or by committing fraud in any way, we may seek damages from you to the full extent of the law, and you may be banned from participation in future Microsoft promotions.

### GOVERNING LAW

This Contest will be governed by the laws of the State of Washington, and you consent to the exclusive jurisdiction and venue of the courts of the State of Washington for any disputes arising out of this Contest.

### PRIVACY

At Microsoft, we are committed to protecting your privacy. Microsoft uses the information you provide on this form to notify you of important information about our products, upgrades, and enhancements, and to send you information about other Microsoft products and services. Microsoft will not share the information you provide with third parties without your permission, except where necessary to complete the services or transactions you have requested, or as required by law. Microsoft is committed to protecting the security of your personal information. We use a variety of security technologies and procedures to help protect your personal information from unauthorized access, use, or disclosure. Your personal information is never shared outside the company without your permission, except under the conditions explained above. If you believe that Microsoft has not adhered to this statement, please contact Microsoft by sending an email to privrc@microsoft.com or postal mail to Microsoft Privacy Response Center, Microsoft Corporation, One Microsoft Way, Redmond, WA 98052.

## Timeline

This challenge benchmarks the performance of real-time AEC algorithms on a real (not simulated) test set. Participants will evaluate their acoustic echo canceller on the blind test set and submit the results (audio clips) for evaluation. The challenge timeline is as follows:

- **January 8, 2021:** Release of the datasets.
- **March 8, 2021:** Blind test set released to participants.
- **March 15, 2021:** Deadline for participants to submit their results for objective and P.808 subjective evaluation on the blind test set.
- **March 22, 2021:** Organizers notify the participants of the results.
- **March 26, 2021:** Regular paper submission deadline for INTERSPEECH 2021.
- **June 2, 2021:** Paper acceptance/rejection notification.
- **June 4, 2021:** Notification of the winners.

## Organizers

- [Ross Cutler](https://www.microsoft.com/en-us/research/people/rcutler/), Microsoft Corp, USA
- Ando Saabas, Microsoft Corp, Estonia
- Tanel Pärnamaa, Microsoft Corp, Estonia
- Markus Loide, Microsoft Corp, Estonia
- [Robert Aichner](https://www.microsoft.com/en-us/research/people/raichner/), Microsoft Corp, USA
- [Sebastian Braun](https://www.microsoft.com/en-us/research/people/sebraun/), Microsoft Research, Germany
- [Hannes Gamper](https://www.microsoft.com/en-us/research/people/hagamper/), Microsoft Research, USA
- Sriram Srinivasan, Microsoft Corp, USA
- Karsten Sorensen, Microsoft Corp, USA

## Related links

- [Training and test datasets](https://github.com/microsoft/AEC-Challenge)
- [Acoustic Echo Cancellation Challenge: Datasets and Testing Framework](https://arxiv.org/pdf/2009.04972.pdf) (ICASSP 2021 paper)

### Other challenges

- [Deep Noise Suppression Challenge – INTERSPEECH 2020](https://www.microsoft.com/en-us/research/academic-program/deep-noise-suppression-challenge-interspeech-2020/)
- [Acoustic Echo Cancellation Challenge – ICASSP 2021](https://www.microsoft.com/en-us/research/academic-program/acoustic-echo-cancellation-challenge-icassp-2021/)
- [Deep Noise Suppression Challenge – ICASSP 2021](https://www.microsoft.com/en-us/research/academic-program/deep-noise-suppression-challenge-icassp-2021/)
- [Acoustic Echo Cancellation Challenge – ICASSP 2022](https://www.microsoft.com/en-us/research/academic-program/acoustic-echo-cancellation-challenge-icassp-2022/)
- [Deep Noise Suppression Challenge – ICASSP 2022](https://www.microsoft.com/en-us/research/academic-program/deep-noise-suppression-challenge-icassp-2022/)

### Final results

The table below lists the accepted papers in order of performance rank. Note that paper acceptance is not well correlated with performance; acceptance was decided through the normal INTERSPEECH peer-review process.

| Place | Performance Rank | Team | Authors | Title |
|---|---|---|---|---|
| 1 | 6 | Chinese Academy of Sciences | Renhua Peng, Linjuan Cheng, Chengshi Zheng and Xiaodong Li | [Acoustic Echo Cancellation using Deep Complex Neural Network with Nonlinear Magnitude Compression and Phase Information](https://www.isca-speech.org/archive/pdfs/interspeech_2021/peng21f_interspeech.pdf) |
| 2 | 8 | Northwestern Polytechnical University | Shimin Zhang, Yuxiang Kong, Shubo Lv, Yanxin Hu and Lei Xie | [F-T-LSTM based Complex Network for Joint Acoustic Echo Cancellation and Speech Enhancement](https://arxiv.org/pdf/2106.07577.pdf) |
| 3 | 10 | Evolve Technologies | Lukas Pfeifenberger, Matthias Zöhrer and Franz Pernkopf | [Acoustic Echo Cancellation with Cross-Domain Learning](https://www.isca-speech.org/archive/pdfs/interspeech_2021/pfeifenberger21_interspeech.pdf) |
| 4 | 12 | Technische Universität Braunschweig | Ernst Seidel, Jan Franzen, Maximilian Strake and Tim Fingscheidt | [Y^2-Net FCRN for Acoustic Echo and Noise Suppression](https://arxiv.org/pdf/2103.17189.pdf) |
| 5 | 14 | Technion | Amir Ivry, Israel Cohen, Baruch Berdugo | [Nonlinear Acoustic Echo Cancellation with Deep Learning](https://arxiv.org/pdf/2106.13754.pdf) |
| NA | NA | NA | Ross Cutler, Ando Saabas, Tanel Parnamaa, Markus Loide, Sten Sootla, Marju Purin, Hannes Gamper, Sebastian Braun, Karsten Sorensen, Robert Aichner and Sriram Srinivasan | [INTERSPEECH 2021 Acoustic Echo Cancellation Challenge](https://www.isca-speech.org/archive/pdfs/interspeech_2021/cutler21_interspeech.pdf) |

### Challenge submissions
style=\"border-spacing: inherit;collapse;width: 861px;border: 1px solid black\" width=\"783\">\n<thead>\n\n<tr style=\"height: 47px\">\n<th style=\"border: 1px solid black;padding: 5px\" width=\"21\">Id<\/th>\n<th style=\"border: 1px solid black;padding: 5px\" width=\"235\">Team<\/th>\n<th style=\"border: 1px solid black;padding: 5px\" width=\"73\">ST NE MOS<\/th>\n<th style=\"border: 1px solid black;padding: 5px\" width=\"112\">ST FE Echo DMOS<\/th>\n<th style=\"border: 1px solid black;padding: 5px\" width=\"97\">DT Echo DMOS<\/th>\n<th style=\"border: 1px solid black;padding: 5px\" width=\"103\">DT Other DMOS<\/th>\n<th style=\"border: 1px solid black;padding: 5px\" width=\"79\">mean<\/th>\n<th style=\"border: 1px solid black;padding: 5px\" width=\"63\">CI<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td style=\"border: 1px solid black;padding: 5px\">4<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">ERCESI<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">4.25<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">4.59<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">4.69<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">4.18<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">4.43<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">0.02<\/td>\n<\/tr>\n<tr>\n<td style=\"border: 1px solid black;padding: 5px\">2<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">Trident<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">4.27<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">4.49<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">4.52<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">4.39<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">4.42<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">0.02<\/td>\n<\/tr>\n<tr>\n<td style=\"border: 1px solid black;padding: 5px\">7<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">Kuaishou<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">4.10<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">4.54<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">4.77<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">4.24<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">4.41<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">0.02<\/td>\n<\/tr>\n<tr>\n<td style=\"border: 1px solid black;padding: 5px\">8<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">ByteDance SAMI<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">4.32<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">4.45<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">4.59<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">4.28<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">4.41<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">0.02<\/td>\n<\/tr>\n<tr>\n<td style=\"border: 1px solid black;padding: 5px\">14<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">Alibaba Group<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">4.19<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">4.49<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">4.58<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">4.27<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">4.38<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">0.02<\/td>\n<\/tr>\n<tr>\n<td style=\"border: 1px solid black;padding: 5px\">13<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">Chinese Academy of 
Science<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">4.26<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">4.34<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">4.36<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">4.23<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">4.30<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">0.02<\/td>\n<\/tr>\n<tr>\n<td style=\"border: 1px solid black;padding: 5px\">5<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">Bytedance<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">4.23<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">4.49<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">4.31<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">4.15<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">4.29<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">0.02<\/td>\n<\/tr>\n<tr style=\"height: 46px\">\n<td style=\"border: 1px solid black;padding: 5px\">9<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">Northwestern Polytechnical University<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">3.78<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">4.44<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">4.44<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">3.90<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">4.14<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">0.02<\/td>\n<\/tr>\n<tr>\n<td style=\"border: 1px solid black;padding: 5px\">11<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">proactivaudio GmbH<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">4.13<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">4.12<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">4.18<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">4.04<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">4.12<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">0.02<\/td>\n<\/tr>\n<tr>\n<td style=\"border: 1px solid black;padding: 5px\">3<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">Evolve Technologies<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">4.01<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">4.52<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">3.90<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">3.72<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">4.04<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">0.02<\/td>\n<\/tr>\n<tr>\n<td style=\"border: 1px solid black;padding: 5px\">15<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">Baseline<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">4.18<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">3.82<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">4.04<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">3.45<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">3.87<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">0.02<\/td>\n<\/tr>\n<tr style=\"height: 46px\">\n<td style=\"border: 1px solid black;padding: 5px\">10<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">Technische Universit\u00e4t Braunschweig<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">4.16<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">3.73<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">3.72<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">3.53<\/td>\n<td style=\"border: 1px solid 
black;padding: 5px\">3.78<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">0.03<\/td>\n<\/tr>\n\n<tr>\n<td style=\"border: 1px solid black;padding: 5px\">12<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">Merry Electronics<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">3.29<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">3.83<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">4.21<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">2.92<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">3.56<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">0.03<\/td>\n<\/tr>\n<tr>\n<td style=\"border: 1px solid black;padding: 5px\">6<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">Technion<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">2.73<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">2.50<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">3.53<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">3.40<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">3.04<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">0.03<\/td>\n<\/tr>\n<tr style=\"height: 47px\">\n<td style=\"border: 1px solid black;padding: 5px\">15<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">Universitat Polit\u00e8cncia de Val\u00e8ncia<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">2.25<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">3.37<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">3.76<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">1.92<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">2.82<\/td>\n<td style=\"border: 1px solid black;padding: 5px\">0.03<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p><!--img class=\"alignnone wp-image-735349 size-full\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2021\/01\/results.png\" sizes=\"(max-width: 874px) 100vw, 874px\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2021\/01\/results.png 1747w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2021\/01\/results-300x105.png 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2021\/01\/results-1024x358.png 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2021\/01\/results-768x269.png 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2021\/01\/results-1536x537.png 1536w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2021\/01\/results-16x6.png 16w\" alt=\"Challenge submissions table\" width=\"1747\" height=\"611\" \/--><\/p>\n<h3>Anova table of top entries<\/h3>\n\n<p><\/p>\n\n<table style=\"border-spacing: inherit;border-collapse: collapse\" width=\"620\">\n\n<thead>\n<tr>\n<th style=\"border:1px solid black;padding:5px\">Team<\/th>\n<th style=\"border:1px solid black;padding:5px\">ERCESI<\/th>\n<th style=\"border:1px solid black;padding:5px\">Trident<\/th>\n<th style=\"border:1px solid black;padding:5px\">Kuaishou<\/th>\n<th style=\"border:1px solid black;padding:5px\">ByteDance SAMI<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td style=\"border:1px solid black;padding:5px\">ERCESI<\/td>\n<td style=\"border:1px solid black;padding:5px\">1<\/td>\n<td style=\"border:1px solid black;padding:5px\">0.81<\/td>\n<td style=\"border:1px solid black;padding:5px\">0.73<\/td>\n<td style=\"border:1px solid black;padding:5px\">0.64<\/td>\n<\/tr>\n<tr>\n<td style=\"border:1px solid black;padding:5px\">Trident<\/td>\n<td 
style=\"border:1px solid black;padding:5px\">0.81<\/td>\n<td style=\"border:1px solid black;padding:5px\">1<\/td>\n<td style=\"border:1px solid black;padding:5px\">0.91<\/td>\n<td style=\"border:1px solid black;padding:5px\">0.82<\/td>\n<\/tr>\n<tr>\n<td style=\"border:1px solid black;padding:5px\">Kuaishou<\/td>\n<td style=\"border:1px solid black;padding:5px\">0.73<\/td>\n<td style=\"border:1px solid black;padding:5px\">0.91<\/td>\n<td style=\"border:1px solid black;padding:5px\">1<\/td>\n<td style=\"border:1px solid black;padding:5px\">0.91<\/td>\n<\/tr>\n<tr>\n<td style=\"border:1px solid black;padding:5px\">ByteDance SAMI<\/td>\n<td style=\"border:1px solid black;padding:5px\">0.64<\/td>\n<td style=\"border:1px solid black;padding:5px\">0.82<\/td>\n<td style=\"border:1px solid black;padding:5px\">0.91<\/td>\n<td style=\"border:1px solid black;padding:5px\">1<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p><!--img class=\"alignnone size-full wp-image-735346\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2021\/01\/anova.png\" sizes=\"(max-width: 418px) 100vw, 418px\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2021\/01\/anova.png 837w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2021\/01\/anova-300x69.png 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2021\/01\/anova-768x176.png 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2021\/01\/anova-16x4.png 16w\" alt=\"table\" width=\"418\" height=\"96\" \/--><\/p>\n<h3>Legend<\/h3>\n<p>ST NE MOS: P.808 MOS of nearend singletalk scenario<br \/>ST FE Echo MOS: P.831 Echo DMOS for farend singletalk<br \/>DT Echo DMOS: P.831 Echo DMOS for doubletalk scenario<br \/>DT Other DMOS: P.831 other degradations DMOS of doubletalk scenario<\/p>\n\n\n","protected":false},"featured_media":0,"template":"","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","msr_hide_image_in_river":null,"footnotes":""},"msr-opportunity-type":[187426],"msr-region":[256048],"msr-locale":[268875],"msr-program-audience":[],"msr-post-option":[],"msr-impact-theme":[],"class_list":["post-716434","msr-academic-program","type-msr-academic-program","status-publish","hentry","msr-opportunity-type-challenges","msr-region-global","msr-locale-en_us"],"msr_description":"The\u00a0INTERSPEECH\u00a0Acoustic Echo Cancellation Challenge is intended to stimulate research in the area of\u00a0acoustic echo cancellation\u00a0(AEC), which is an important part of speech enhancement and still a top issue in audio communication and conferencing systems.","msr_social_media":[],"related-researchers":[{"type":"user_nicename","display_name":"Robert Aichner","user_id":39781,"people_section":"Section name 0","alias":"raichner"},{"type":"user_nicename","display_name":"Sebastian Braun","user_id":37688,"people_section":"Section name 0","alias":"sebraun"},{"type":"user_nicename","display_name":"Ross Cutler","user_id":40660,"people_section":"Section name 0","alias":"rcutler"},{"type":"user_nicename","display_name":"Hannes Gamper","user_id":31943,"people_section":"Section name 0","alias":"hagamper"}],"tab-content":[{"id":0,"name":"About","content":"<strong>Program dates:<\/strong> January 2021 - June 2021\r\n\r\nThe\u00a0<a href=\"https:\/\/www.interspeech2021.org\/\" target=\"_blank\" rel=\"noopener\">INTERSPEECH 2021<\/a>\u00a0Acoustic Echo Cancellation Challenge is intended to 
stimulate research in the area of acoustic echo cancellation (AEC), which is an important part of speech enhancement and still a top issue in audio communication and conferencing systems. Many recent AEC studies report good performance on synthetic datasets where the training and testing data come from the same underlying distribution. However, the AEC performance often degrades significantly on real recordings. Also, most of the conventional objective metrics such as echo return loss enhancement (ERLE) and perceptual evaluation of speech quality (PESQ) do not correlate well with subjective speech quality tests in the presence of background noise and reverberation found in realistic environments.\r\n\r\nIn this challenge, we open source two large datasets to train AEC models under both single talk and double talk scenarios. These datasets consist of recordings from more than 5,000 real audio devices and human speakers in real environments, as well as a synthetic dataset. We also open source an online subjective test framework and provide an online objective metric service for researchers to quickly test their results. The winners of this challenge will be selected based on the average Mean Opinion Score (MOS) achieved across all different single talk and double talk scenarios.\r\n\r\n&nbsp;\r\n<h3>Submission instructions<\/h3>\r\nPlease use\u00a0<a href=\"https:\/\/cmt3.research.microsoft.com\/AECCINTERSPEECH2021\" target=\"_blank\" rel=\"noopener\">Microsoft Conference Management Toolkit\u00a0<\/a>for submitting the results. After logging in, complete the following steps to submit the results:\r\n<ol>\r\n \t<li>Choose \"Create new submission\" in the Author Console.<\/li>\r\n \t<li>Enter title, abstract and co-authors, and upload a\u00a0<i>lastname<\/i>.txt file (can be empty or contain additional information regarding the submission).<\/li>\r\n \t<li>Compress the enhanced results files to a single\u00a0<i>lastname<\/i>.zip file, retaining the same folder and file names as the blind test set (max file size: 350 MB).<\/li>\r\n \t<li><u>After creating the submission<\/u>, return to the \"Author Console\" (by clicking on \"Submissions\" at the top of the page) and upload the\u00a0<i>lastname<\/i>.zip file via \"Upload Supplementary Material\".<\/li>\r\n<\/ol>\r\n<strong>Submission deadline:<\/strong> March 15, 2021, 11:59pm (anywhere on Earth)\r\n\r\n<strong>Contact us:<\/strong> For questions, please contact <a href=\"mailto:aec_challenge@microsoft.com\">aec_challenge@microsoft.com<\/a>"},{"id":1,"name":"Rules","content":"<h2><b>Official Rules<\/b><\/h2>\r\n<h3>SPONSOR<\/h3>\r\nThese Official Rules (\u201cRules\u201d) govern the operation of the Microsoft INTERSPEECH 2021 AEC (see overview) Event Contest (\u201cContest\u201d). Microsoft Corporation, One Microsoft Way, Redmond, WA, 98052, USA, is the Contest sponsor (\u201cSponsor\u201d).\r\n<h3>DEFINITIONS<\/h3>\r\nIn these Rules, \u201cMicrosoft\u201d, \u201cwe\u201d, \u201cour\u201d, and \u201cus\u201d, refer to Sponsor and \u201cyou\u201d and \u201cyourself\u201d refers to a Contest participant, or the parent\/legal guardian of any Contest entrant who has not reached the age of majority to contractually obligate themselves in their legal place of residence. \u201cEvent\u201d refers to the INTERSPEECH 2021 AEC (see overview) event held in Toronto, Canada (the \u201cEvent\u201d). 
By entering you (your parent\/legal guardian if you are not the age of majority in your legal place of residence) agree to be bound by these Rules.\r\n<h3>ENTRY PERIOD<\/h3>\r\nThe Contest will operate from January 8, 2021 to March 22, 2021 (\u201cEntry Period\u201d). The Entry Period is divided into several periods as described in section How to Enter.\r\n<h3>ELIGIBILITY<\/h3>\r\nOpen to any registered Event attendee 18 years of age or older. If you are 18 years of age or older but have not reached the age of majority in your legal place of residence, then you must have consent of a parent\/legal guardian. Employees and directors of Microsoft Corporation and its subsidiaries, affiliates, advertising agencies, and Contest Parties are not eligible, nor are persons involved in the execution or administration of this promotion, or the family members of each above (parents, children, siblings, spouse\/domestic partners, or individuals residing in the same household). Void in Cuba, Iran, North Korea, Sudan, Syria, Region of Crimea, and where prohibited. For business\/tradeshow events: If you are attending the Event in your capacity as an employee, it is your sole responsibility to comply with your employer\u2019s gift policies. Microsoft will not be party to any disputes or actions related to this matter.\r\n<h3>HOW TO ENTER<\/h3>\r\nThe Contest Objective is to promote collaborative research in real-time single-channel Speech Enhancement aimed to maximize the subjective (perceptual) quality of the enhanced speech. Winners will be determined based on the speech quality of AEC models using the online subjective evaluation framework ITU-T P.831. Only models described in accepted INTERSPEECH 2021 papers will be eligible for winning the Contest. See (yet to upload the paper) for additional Contest details. You may participate as an individual or a team. If forming a team, you must designate a \u201cTeam Captain\u201d who will submit all entry materials on behalf of the team. Once you register as part of a Team, you cannot change Teams or alter your current team (either by adding or removing members) after the submission of your Entry. Limit one Entry per person and per team. You may not compete on multiple teams and you may not enter individually and on a team. We are not responsible for Entries that we do not receive for any reason, or for Entries that we receive but are not decipherable or not functional for any reason. Each Team is solely responsible for its own cooperation and teamwork. In no event will Microsoft officiate in any dispute regarding the conduct or cooperation of any Team or its members. The Contest will operate as follows: Registration \/ Development Period: January 8 \u2013 March 8, 2021. To register, please send an email to\u00a0<a href=\"mailto:aec_challenge@microsoft.com\">aec_challenge@microsoft.com<\/a> stating that you are interested to participate in the challenge. Please include the following details in your email:\r\n<ul>\r\n \t<li>Names of the participants and name of the team captain<\/li>\r\n \t<li>Institution\/Company<\/li>\r\n \t<li>Email<\/li>\r\n<\/ul>\r\nThen, i. develop a speech enhancement model that best meets the Contest Objective as described more fully at (yet to be uploaded) and ii. submit a paper to INTERSPEECH 2021 which reports the computational complexity of the model in terms of the number of parameters and the time it takes to infer a frame on a particular CPU (preferably Intel Core i5 quad core machine clocked at 2.4 GHz). 
To develop your model, use any publicly available dataset for training data, including the Contest datasets provided for training and developing models. You may augment your datasets with the Contest dataset. You can augment your data in any way that improves the performance of your model. The final evaluation will be conducted on a blind test set that is similar to the open sourced test set. Testing \/ Entry Period: January 8 \u2013 March 22, 2021. On March 8, the blind test dataset will be made available. You will have until 11:59 PM PT on March 15 to test your model against this dataset and create a set of enhanced clips to submit for judging (your \u201cEntry\u201d). The rules of the challenge are as follows:\r\n<ul>\r\n \t<li>For real-time track, the AEC must take less than the stride time\u00a0<i>T_s<\/i>\u00a0(in ms) to process a frame of size\u00a0<i>T<\/i>\u00a0(in ms) on an Intel Core i5 quad-core machine clocked at 2.4 GHz or equivalent processors. For example,\u00a0<i>T_s = T\/2<\/i>\u00a0for 50% overlap between frames. The total algorithmic latency allowed including the frame size\u00a0<i>T<\/i>, stride time\u00a0<i>T_s<\/i>, and any look ahead must be\u00a0<i>\u2264<\/i>\u00a040ms. For example, for a real-time system that receives 20ms audio chunks, if you use a frame length of 20ms with a stride of 10ms resulting in an algorithmic delay of 30ms, then you satisfy the latency requirements. If you use a frame size of 32ms with a stride of 16ms resulting in an algorithmic delay of 48ms, then your method does not satisfy the latency requirements as the total algorithmic latency exceeds 40ms. If your frame size plus stride\u00a0<i>T_1=T+T_s<\/i>\u00a0is less than 40ms, then you can use up to\u00a0<i>(40-T_1)<\/i>ms future information.<\/li>\r\n \t<li>For non-real-time track, there are no constraints on computation time. To infer the current frame\u00a0<i>i<\/i>\u00a0(in ms),\u00a0the algorithm can access any number of past frames but only 40ms of future frames(<i>i<\/i>+40ms).<\/li>\r\n \t<li>The AEC can be a deep model, a traditional signal processing algorithm, or a mix of the two. There are no restrictions on the AEC aside from the run time and algorithmic delay described above.<\/li>\r\n \t<li>Submissions must follow instructions on overview.<\/li>\r\n \t<li>Winners will be picked based on the subjective echo MOS evaluated on the blind test set using ITU-T P.808 framework described in Section \\ref{sec:framework}.<\/li>\r\n \t<li>The blind test set will be made available to the participants on March 15, 2021. Participants must send the results (audio clips) achieved by their developed models to the organizers. We will use the submitted clips to conduct ITU-T P.808 subjective evaluation and pick the winners based on the results. Participants are forbidden from using the blind test set to retrain or tune their models. They should not submit results using other AEC methods that they are not submitting to INTERSPEECH 2021. Failing to adhere to these rules will lead to disqualification from the challenge.<\/li>\r\n \t<li>Participants should report the computational complexity of their model in terms of the number of parameters and the time it takes to infer a frame on a particular CPU (preferably Intel Core i5 quad-core machine clocked at 2.4 GHz).<\/li>\r\n \t<li>Each participating team must submit an INTERSPEECH paper that summarizes the research efforts and provide all the details to ensure reproducibility. 
INTERSPEECH 2021 Paper Submission and Judging Period: March 22, 2021 \u2013 June 2, 2021, 11:59 PM PT. Your Entry must be described in a paper accepted by INTERSPEECH 2021. To submit a paper, visit https:\/\/www.interspeech2021.org\/. The entry limit is one per person during the Entry Period. Any attempt by you to obtain more than the stated number of entries by using multiple\/different accounts, identities, registrations, logins, or any other methods will void your entries and you may be disqualified. Use of any automated system to participate is prohibited. We are not responsible for excess, lost, late, or incomplete entries. If disputed, entries will be deemed submitted by the \u201cauthorized account holder\u201d of the email address, social media account, or other method used to enter. The \u201cauthorized account holder\u201d is the natural person assigned to an email address by an internet or online service provider, or other organization responsible for assigning email addresses.\r\n<h3>ELIGIBLE ENTRY<\/h3>\r\nTo be eligible, an entry must meet the following content\/technical requirements:\r\n<ul>\r\n \t<li>Your entry must be the method described in a paper accepted by INTERSPEECH 2021; and<\/li>\r\n \t<li>Your entry must be your own original work; and<\/li>\r\n \t<li>Your entry cannot have been selected as a winner in any other contest; and<\/li>\r\n \t<li>You must have obtained any and all consents, approvals, or licenses required for you to submit your entry; and<\/li>\r\n \t<li>To the extent that an entry requires the submission of user-generated content such as software, photos, videos, music, artwork, essays, etc., entrants warrant that their entry is their original work, has not been copied from others without permission or apparent rights, and does not violate the privacy, intellectual property rights, or other rights of any other person or entity. You may include Microsoft trademarks, logos, and designs, for which Microsoft grants you a limited license to use for the sole purpose of submitting an entry into this Contest; and<\/li>\r\n \t<li>Your entry may NOT contain, as determined by us in our sole and absolute discretion, any content that is obscene or offensive, violent, defamatory, disparaging or illegal, or that promotes alcohol, illegal drugs, tobacco or a particular political agenda, or that communicates messages that may reflect negatively on the goodwill of Microsoft; and<\/li>\r\n \t<li>Your entry must NOT include enhanced clips produced using other AEC methods that you are not submitting to INTERSPEECH 2021.<\/li>\r\n<\/ul>\r\n<h3>USE OF ENTRIES<\/h3>\r\nWe are not claiming ownership rights to your entry. However, by submitting an entry, you grant us an irrevocable, royalty-free, worldwide right and license to use, review, assess, test and otherwise analyze your entry and all its content in connection with this Contest and use your entry in any media whatsoever now known or later invented for any non-commercial or commercial purpose, including, but not limited to, the marketing, sale or promotion of Microsoft products or services, without further permission from you. 
You will not receive any compensation or credit for use of your entry, other than what is described in these Official Rules. By entering you acknowledge that we may have developed or commissioned materials similar or identical to your entry and you waive any claims resulting from any similarities to your entry. Further, you understand that we will not restrict work assignments of representatives who have had access to your entry and you agree that use of information in our representatives\u2019 unaided memories in the development or deployment of our products or services does not create liability for us under this agreement or copyright or trade secret law. Your entry may be posted on a public website. We are not responsible for any unauthorized use of your entry by visitors to this website. We are not obligated to use your entry for any purpose, even if it has been selected as a winning entry.\r\n<h3>WINNER SELECTION AND NOTIFICATION<\/h3>\r\nPending confirmation of eligibility, potential winners will be selected by Microsoft or their Agent or a qualified judging panel from among all eligible entries received based on the following judging criteria:\r\n<ul>\r\n \t<li>99% \u2013 The subjective speech quality evaluated on the blind test set using the ITU-T P.808 framework. We will use the submitted clips with no alteration to conduct the ITU-T P.808 subjective evaluation and pick the winners based on the results. See the overview for additional Contest details.<\/li>\r\n \t<li>1% \u2013 The Entry was described in an accepted INTERSPEECH 2021 paper.<\/li>\r\n<\/ul>\r\nWinners will be selected and notified within 7 days following the Event. In the event of a tie between any eligible entries, an additional judge will break the tie based on the judging criteria described above. The decisions of the judges are final and binding. If a public vote determines winners, it is prohibited for any person to obtain votes by any fraudulent or inappropriate means, including offering prizes or other inducements in exchange for votes, automated programs, or fraudulent IDs. Microsoft will void any questionable votes.\r\n<h3>ODDS<\/h3>\r\nThe odds of winning are based on the number and quality of eligible entries received.\r\n<h3>GENERAL CONDITIONS AND RELEASE OF LIABILITY<\/h3>\r\nTo the extent allowed by law, by entering you agree to release and hold harmless Microsoft and its respective parents, partners, subsidiaries, affiliates, employees, and agents from any and all liability for any injury, loss, or damage of any kind arising in connection with this Contest. All local laws apply. The decisions of Microsoft are final and binding. We reserve the right to cancel, change, or suspend this Contest for any reason, including cheating, technology failure, catastrophe, war, or any other unforeseen or unexpected event that affects the integrity of this Contest, whether human or mechanical. If the integrity of the Contest cannot be restored, we may select winners from among all eligible entries received before we had to cancel, change, or suspend the Contest. 
If you attempt to compromise, or we have strong reason to believe that you have compromised, the integrity or the legitimate operation of this Contest by cheating, hacking, creating a bot or other automated program, or by committing fraud in any way, we may seek damages from you to the full extent of the law and you may be banned from participation in future Microsoft promotions.\r\n<h3>GOVERNING LAW<\/h3>\r\nThis Contest will be governed by the laws of the State of Washington, and you consent to the exclusive jurisdiction and venue of the courts of the State of Washington for any disputes arising out of this Contest.\r\n<h3>PRIVACY<\/h3>\r\nAt Microsoft, we are committed to protecting your privacy. Microsoft uses the information you provide on this form to notify you of important information about our products, upgrades, and enhancements, and to send you information about other Microsoft products and services. Microsoft will not share the information you provide with third parties without your permission except where necessary to complete the services or transactions you have requested, or as required by law. Microsoft is committed to protecting the security of your personal information. We use a variety of security technologies and procedures to help protect your personal information from unauthorized access, use, or disclosure. Your personal information is never shared outside the company without your permission, except under the conditions explained above. If you believe that Microsoft has not adhered to this statement, please contact Microsoft by sending an email to\u00a0<a href=\"mailto:privrc@microsoft.com\">privrc@microsoft.com<\/a>\u00a0or postal mail to Microsoft Privacy Response Center, Microsoft Corporation, One Microsoft Way, Redmond, WA 98052"},{"id":2,"name":"Timeline","content":"<h2>Timeline<\/h2>\r\nThis challenge benchmarks the performance of real-time AEC algorithms on a real (not simulated) test set. Participants will evaluate their acoustic echo canceller on the blind test set and submit the results (audio clips) for evaluation. 
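As a rough illustration of the submission flow, here is a hypothetical Python sketch that runs an AEC over every microphone\/loopback pair in a blind-test-style folder and writes the enhanced clips under the same file names, as the submission instructions require. The folder layout, the _mic\/_lpb file-name suffixes, and the enhance function are assumptions for the sketch, not the official naming scheme.\r\n<pre>\r\n
from pathlib import Path
import soundfile as sf  # assumes the soundfile package is installed

def enhance(mic, lpb, sr):
    # Placeholder: run your AEC here; identity pass-through for the sketch.
    return mic

in_dir, out_dir = Path('blind_test_set'), Path('enhanced')
out_dir.mkdir(exist_ok=True)

# Assumed naming: each clip pair is ID_mic.wav (microphone signal)
# plus ID_lpb.wav (far-end loopback signal).
for mic_path in sorted(in_dir.glob('*_mic.wav')):
    lpb_path = mic_path.with_name(mic_path.name.replace('_mic', '_lpb'))
    mic, sr = sf.read(str(mic_path))
    lpb, _ = sf.read(str(lpb_path))
    out = enhance(mic, lpb, sr)
    # Keep the original file name so clips can be matched for judging.
    sf.write(str(out_dir / mic_path.name), out, sr)
\r\n<\/pre>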
The timeline for the challenge is as follows:\r\n<ul>\r\n \t<li><b>January 8, 2021<\/b><strong>:<\/strong> Release of the datasets.<\/li>\r\n \t<li><b>March 8, 2021<\/b><strong>:<\/strong> Blind test set released to participants.<\/li>\r\n \t<li><b>March 15, 2021<\/b><strong>:<\/strong> Deadline for participants to submit their results for objective and P.808 subjective evaluation on the blind test set.<\/li>\r\n \t<li><b>March 22, 2021<\/b><strong>:<\/strong> Organizers will notify the participants about the results.<\/li>\r\n \t<li><b>March 26, 2021<\/b><strong>:<\/strong> Regular paper submission deadline for INTERSPEECH 2021.<\/li>\r\n \t<li><b>June 2, 2021<\/b><strong>:<\/strong> Paper acceptance\/rejection notification.<\/li>\r\n \t<li><b>June 4, 2021<\/b><strong>:<\/strong> Notification of the winners.<\/li>\r\n<\/ul>"},{"id":3,"name":"Organizers","content":"<h2>Organizers<\/h2>\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/rcutler\/\">Ross Cutler<\/a>, Microsoft Corp, USA\r\nAndo Saabas, Microsoft Corp, Estonia\r\nTanel P\u00e4rnamaa, Microsoft Corp, Estonia\r\nMarkus Loide, Microsoft Corp, Estonia\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/raichner\/\">Robert Aichner<\/a>, Microsoft Corp, USA\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/sebraun\/\">Sebastian Braun<\/a>, Microsoft Research, Germany\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/hagamper\/\">Hannes Gamper<\/a>, Microsoft Research, USA\r\nSriram Srinivasan, Microsoft Corp, USA\r\nKarsten Sorensen, Microsoft Corp, USA"},{"id":4,"name":"Links","content":"<h2>Related links<\/h2>\r\n<ul>\r\n \t<li><a href=\"https:\/\/github.com\/microsoft\/AEC-Challenge\" target=\"_blank\" rel=\"noopener\">Training and test datasets<\/a><\/li>\r\n \t<li><a href=\"https:\/\/arxiv.org\/pdf\/2009.04972.pdf\" target=\"_blank\" rel=\"noopener\">Acoustic Echo Cancellation Challenge: Datasets and Testing Framework<\/a> (ICASSP 2021, paper)<\/li>\r\n<\/ul>\r\n<h3>Other challenges<\/h3>\r\n<ul>\r\n \t<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/academic-program\/deep-noise-suppression-challenge-interspeech-2020\/\">Deep Noise Suppression Challenge - INTERSPEECH 2020<\/a><\/li>\r\n \t<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/academic-program\/acoustic-echo-cancellation-challenge-icassp-2021\/\">Acoustic Echo Cancellation Challenge - ICASSP 2021<\/a><\/li>\r\n \t<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/academic-program\/deep-noise-suppression-challenge-icassp-2021\/\">Deep Noise Suppression Challenge - ICASSP 2021<\/a><\/li>\r\n \t<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/academic-program\/acoustic-echo-cancellation-challenge-icassp-2022\/\">Acoustic Echo Cancellation Challenge - ICASSP 2022<\/a><\/li>\r\n \t<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/academic-program\/deep-noise-suppression-challenge-icassp-2022\/\">Deep Noise Suppression Challenge - ICASSP 2022<\/a><\/li>\r\n<\/ul>"},{"id":5,"name":"Results","content":"<h2>Final results<\/h2>\r\nThe table below lists the accepted papers in order of performance rank. Note that paper acceptance is not well correlated with performance. 
Paper acceptance was decided through the normal INTERSPEECH peer-review process.\r\n<div style=\"direction: ltr\">\r\n<table style=\"direction: ltr;border-collapse: separate;border: 1pt solid #a3a3a3;border-spacing: 0px\" title=\"\" border=\"1\" summary=\"\" cellspacing=\"0\" cellpadding=\"0\">\r\n<tbody>\r\n<tr>\r\n<td style=\"background-color: #0070c0;vertical-align: top;width: 64px;padding: 0px;border: 1px solid\">\r\n<p style=\"margin: 0in;font-family: Calibri;font-size: 11.0pt;color: white\">Place<\/p>\r\n<\/td>\r\n<td style=\"background-color: #0070c0;vertical-align: top;width: 101px;padding: 0px;border: 1px solid\">\r\n<p style=\"margin: 0in;font-family: Calibri;font-size: 11.0pt;color: white\">Performance Rank<\/p>\r\n<\/td>\r\n<td style=\"background-color: #0070c0;vertical-align: top;width: 130px;padding: 0px;border: 1px solid\">\r\n<p style=\"margin: 0in;font-family: inherit;font-size: 11.0pt;color: white\">Team<\/p>\r\n<\/td>\r\n<td style=\"background-color: #0070c0;vertical-align: top;width: 274px;padding: 0px;border: 1px solid\">\r\n<p style=\"margin: 0in;font-family: Calibri;font-size: 11.0pt;color: white\">Authors<\/p>\r\n<\/td>\r\n<td style=\"background-color: #0070c0;vertical-align: top;width: 323px;padding: 0px;border: 1px solid\">\r\n<p style=\"margin: 0in;font-family: inherit;font-size: 11.0pt;color: white\">Title<\/p>\r\n<\/td>\r\n<\/tr>\r\n<tr>\r\n<td style=\"vertical-align: top;width: 64px;padding: 0px;border: 1px solid\">\r\n<p style=\"margin: 0in;font-family: Calibri;font-size: 11.0pt\">1<\/p>\r\n<\/td>\r\n<td style=\"vertical-align: top;width: 101px;padding: 0px;border: 1px solid\">\r\n<p style=\"margin: 0in;font-family: Calibri;font-size: 11.0pt\">6<\/p>\r\n<\/td>\r\n<td style=\"vertical-align: top;width: 130px;padding: 0px;border: 1px solid\">\r\n<p style=\"margin: 0in;font-family: inherit;font-size: 11.0pt\">Chinese Academy of Sciences<\/p>\r\n<\/td>\r\n<td style=\"vertical-align: top;width: 274px;padding: 0px;border: 1px solid\">\r\n<p style=\"margin: 0in;font-family: Calibri;font-size: 11.0pt\">Renhua Peng, Linjuan Cheng, Chengshi Zheng and Xiaodong Li<\/p>\r\n<\/td>\r\n<td style=\"vertical-align: top;width: 323px;padding: 0px;border: 1px solid\">\r\n<p style=\"margin: 0in;font-family: inherit;font-size: 11.0pt\"><a href=\"https:\/\/www.isca-speech.org\/archive\/pdfs\/interspeech_2021\/peng21f_interspeech.pdf\">Acoustic Echo Cancellation using Deep Complex Neural Network with Nonlinear Magnitude Compression and Phase Information<\/a><\/p>\r\n<\/td>\r\n<\/tr>\r\n<tr>\r\n<td style=\"vertical-align: top;width: 64px;padding: 0px;border: 1px solid\">\r\n<p style=\"margin: 0in;font-family: Calibri;font-size: 11.0pt\">2<\/p>\r\n<\/td>\r\n<td style=\"vertical-align: top;width: 101px;padding: 0px;border: 1px solid\">\r\n<p style=\"margin: 0in;font-family: Calibri;font-size: 11.0pt\">8<\/p>\r\n<\/td>\r\n<td style=\"vertical-align: top;width: 130px;padding: 0px;border: 1px solid\">\r\n<p style=\"margin: 0in;font-family: inherit;font-size: 11.0pt\">Northwestern Polytechnical University<\/p>\r\n<\/td>\r\n<td style=\"vertical-align: top;width: 274px;padding: 0px;border: 1px solid\">\r\n<p style=\"margin: 0in;font-family: Calibri;font-size: 11.0pt\">Shimin Zhang, Yuxiang Kong, Shubo Lv, Yanxin Hu and Lei Xie<\/p>\r\n<\/td>\r\n<td style=\"vertical-align: top;width: 323px;padding: 0px;border: 1px solid\">\r\n<p style=\"margin: 0in;font-family: inherit;font-size: 11.0pt\"><a href=\"https:\/\/arxiv.org\/pdf\/2106.07577.pdf\">F-T-LSTM based Complex Network for Joint 
Acoustic\u00a0Echo\u00a0Cancellation\u00a0and Speech Enhancement<\/a><\/p>\r\n<\/td>\r\n<\/tr>\r\n<tr>\r\n<td style=\"vertical-align: top;width: 64px;padding: 0px;border: 1px solid\">\r\n<p style=\"margin: 0in;font-family: Calibri;font-size: 11.0pt\">3<\/p>\r\n<\/td>\r\n<td style=\"vertical-align: top;width: 101px;padding: 0px;border: 1px solid\">\r\n<p style=\"margin: 0in;font-family: Calibri;font-size: 11.0pt\">10<\/p>\r\n<\/td>\r\n<td style=\"vertical-align: top;width: 130px;padding: 0px;border: 1px solid\">\r\n<p style=\"margin: 0in;font-family: inherit;font-size: 11.0pt\">Evolve Technologies<\/p>\r\n<\/td>\r\n<td style=\"vertical-align: top;width: 274px;padding: 0px;border: 1px solid\">\r\n<p style=\"margin: 0in;font-family: Calibri;font-size: 11.0pt\">Lukas Pfeifenberger, Matthias Z\u00f6hrer and Franz Pernkopf<\/p>\r\n<\/td>\r\n<td style=\"vertical-align: top;width: 323px;padding: 0px;border: 1px solid\">\r\n<p style=\"margin: 0in;font-family: inherit;font-size: 11.0pt\"><a href=\"https:\/\/www.isca-speech.org\/archive\/pdfs\/interspeech_2021\/pfeifenberger21_interspeech.pdf\">Acoustic\u00a0Echo\u00a0Cancellation\u00a0with Cross-Domain Learning<\/a><\/p>\r\n<\/td>\r\n<\/tr>\r\n<tr>\r\n<td style=\"vertical-align: top;width: 64px;padding: 0px;border: 1px solid\">\r\n<p style=\"margin: 0in;font-family: Calibri;font-size: 11.0pt\">4<\/p>\r\n<\/td>\r\n<td style=\"vertical-align: top;width: 101px;padding: 0px;border: 1px solid\">\r\n<p style=\"margin: 0in;font-family: Calibri;font-size: 11.0pt\">12<\/p>\r\n<\/td>\r\n<td style=\"vertical-align: top;width: 130px;padding: 0px;border: 1px solid\">\r\n<p style=\"margin: 0in;font-family: inherit;font-size: 11.0pt\">Technische Universit\u00e4t Braunschweig<\/p>\r\n<\/td>\r\n<td style=\"vertical-align: top;width: 274px;padding: 0px;border: 1px solid\">\r\n<p style=\"margin: 0in;font-family: Calibri;font-size: 11.0pt\">Ernst Seidel, Jan Franzen, Maximilian Strake and Tim Fingscheidt<\/p>\r\n<\/td>\r\n<td style=\"vertical-align: top;width: 323px;padding: 0px;border: 1px solid\">\r\n<p style=\"margin: 0in;font-family: inherit;font-size: 11.0pt\"><a href=\"https:\/\/arxiv.org\/pdf\/2103.17189.pdf\">Y^2-Net FCRN for Acoustic\u00a0Echo\u00a0and Noise Suppression<\/a><\/p>\r\n<\/td>\r\n<\/tr>\r\n<tr>\r\n<td style=\"vertical-align: top;width: 64px;padding: 0px;border: 1px solid\">\r\n<p style=\"margin: 0in;font-family: Calibri;font-size: 11.0pt\">5<\/p>\r\n<\/td>\r\n<td style=\"vertical-align: top;width: 101px;padding: 0px;border: 1px solid\">\r\n<p style=\"margin: 0in;font-family: Calibri;font-size: 11.0pt\">14<\/p>\r\n<\/td>\r\n<td style=\"vertical-align: top;width: 130px;padding: 0px;border: 1px solid\">\r\n<p style=\"margin: 0in;font-family: inherit;font-size: 11.0pt\">Technion<\/p>\r\n<\/td>\r\n<td style=\"vertical-align: top;width: 274px;padding: 0px;border: 1px solid\">\r\n<p style=\"margin: 0in;font-family: Calibri;font-size: 11.0pt\">Amir Ivry, Israel Cohen, Baruch Berdugo<\/p>\r\n<\/td>\r\n<td style=\"vertical-align: top;width: 323px;padding: 0px;border: 1px solid\">\r\n<p style=\"margin: 0in;font-family: inherit;font-size: 11.0pt\"><a href=\"https:\/\/arxiv.org\/pdf\/2106.13754.pdf\">Nonlinear Acoustic Echo Cancellation with Deep Learning<\/a><\/p>\r\n<\/td>\r\n<\/tr>\r\n<tr>\r\n<td style=\"vertical-align: top;width: 64px;padding: 0px;border: 1px solid\">\r\n<p style=\"margin: 0in;font-family: Calibri;font-size: 11.0pt\">NA<\/p>\r\n<\/td>\r\n<td style=\"vertical-align: top;width: 101px;padding: 0px;border: 1px solid\">\r\n<p 
style=\"margin: 0in;font-family: Calibri;font-size: 11.0pt\">NA<\/p>\r\n<\/td>\r\n<td style=\"vertical-align: top;width: 130px;padding: 0px;border: 1px solid\">\r\n<p style=\"margin: 0in;font-family: inherit;font-size: 11.0pt;color: #070706\">NA<\/p>\r\n<\/td>\r\n<td style=\"vertical-align: top;width: 274px;padding: 0px;border: 1px solid\">\r\n<p style=\"margin: 0in;font-family: Calibri;font-size: 11.0pt\">Ross Cutler, Ando Saabas, Tanel Parnamaa, Markus Loide, Sten Sootla, Marju Purin, Hannes Gamper, Sebastian Braun, Karsten Sorensen, Robert Aichner and Sriram Srinivasan<\/p>\r\n<\/td>\r\n<td style=\"vertical-align: top;width: 323px;padding: 0px;border: 1px solid\">\r\n<p style=\"margin: 0in;font-family: Calibri;font-size: 11.0pt\"><a href=\"https:\/\/www.isca-speech.org\/archive\/pdfs\/interspeech_2021\/cutler21_interspeech.pdf\">INTERSPEECH\u00a02021 Acoustic\u00a0Echo\u00a0Cancellation Challenge<\/a><\/p>\r\n<\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\n<\/div>\r\nChallenge submissions\r\n\r\n<img class=\"alignnone size-full wp-image-735349\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2021\/01\/results.png\" alt=\"\" width=\"874\" height=\"306\" \/>\r\n<h4>Anova table of top entries<\/h4>\r\n<img class=\"alignnone size-full wp-image-735346\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2021\/01\/anova.png\" alt=\"table\" width=\"418\" height=\"96\" \/>\r\n\r\n&nbsp;\r\n<h3>Legend<\/h3>\r\nST NE MOS: P.808 MOS of nearend singletalk scenario\r\nST FE Echo MOS: P.831 Echo DMOS for farend singletalk\r\nDT Echo DMOS: P.831 Echo DMOS for doubletalk scenario\r\nDT Other DMOS: P.831 other degradations DMOS of doubletalk scenario"}],"msr_impact_theme":[],"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-academic-program\/716434","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-academic-program"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/msr-academic-program"}],"version-history":[{"count":26,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-academic-program\/716434\/revisions"}],"predecessor-version":[{"id":1137284,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-academic-program\/716434\/revisions\/1137284"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=716434"}],"wp:term":[{"taxonomy":"msr-opportunity-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-opportunity-type?post=716434"},{"taxonomy":"msr-region","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-region?post=716434"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=716434"},{"taxonomy":"msr-program-audience","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-program-audience?post=716434"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=716434"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=716434"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}