{"id":605979,"date":"2019-09-03T17:48:28","date_gmt":"2019-09-04T00:48:28","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-event&#038;p=605979"},"modified":"2025-08-06T11:56:17","modified_gmt":"2025-08-06T18:56:17","slug":"interspeech-2019","status":"publish","type":"msr-event","link":"https:\/\/www.microsoft.com\/en-us\/research\/event\/interspeech-2019\/","title":{"rendered":"Microsoft at Interspeech 2019"},"content":{"rendered":"\n\n<p><strong>Venue:<\/strong> <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.interspeech2019.org\/venue_and_travel\/conference-venue\/\" target=\"_blank\" rel=\"noopener noreferrer\">Messecongress Graz<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<p><strong>Website:<\/strong> <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.interspeech2019.org\/\" target=\"_blank\" rel=\"noopener noreferrer\">Interspeech 2019<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<p>Interspeech is the world\u2019s largest and most comprehensive conference on the science and technology of spoken language processing. Microsoft joins the conference as a proud gold sponsor. 
Stop by our booth to chat with our experts, see demos of our latest research and find out about <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/careers.microsoft.com\/us\/en\/c\/research-jobs?rt=professional\" target=\"_blank\" rel=\"noopener\">career opportunities<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>\u00a0with Microsoft.<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<h3>Monday, September 16<\/h3>\n<p>15:30-15:50 | Hall 1 | Oral<br \/>\n<strong>Speaker Adaptation for Attention-Based End-to-End Speech Recognition<\/strong><br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/zhong-meng-39a6224b\/\" target=\"_blank\" rel=\"noopener\">Zhong Meng<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/yashesh-yash-gaur-335b1618\/\" target=\"_blank\" rel=\"noopener\">Yashesh Gaur<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jinyli\/\">Jinyu Li<\/a>, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/yifan-gong-4a06162\/\" target=\"_blank\" rel=\"noopener\">Yifan Gong<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<p>14:30-16:30 | Gallery C | Poster<br \/>\n<strong>Zero Shot Intent Classification Using Long-Short Term Memory Networks<\/strong><strong><br \/>\n<\/strong><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/kywillia\/\">Kyle Williams<\/a><\/p>\n<p>14:30 \u2013 16:30 | Hall 4 | Show & Tell<br \/>\n<strong>Speech Based Web Navigation for Movement Impaired Users<\/strong><br \/>\n<a class=\"msr-external-link glyph-append 
glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/vasiliy-radostev-063947\/\" target=\"_blank\" rel=\"noopener\">Vasiliy Radostev<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/serge-berger-0933bb25\/\" target=\"_blank\" rel=\"noopener\">Serge Berger<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/justin-sina-tabrizi-a4a86851\/\" target=\"_blank\" rel=\"noopener\">Justin Tabrizi<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/pavel-pasha-kamyshev-6685b227\/\" target=\"_blank\" rel=\"noopener\">Pasha Kamyshev<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/hisami-suzuki-23547376\/\" target=\"_blank\" rel=\"noopener\">Hisami Suzuki<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<h3>Tuesday, September 17<\/h3>\n<p>10:00-12:00 | Hall 10\/E | Poster<br \/>\n<strong>A Scalable Noisy Speech Dataset and Online Subjective Test Framework<\/strong><strong>\u00a0<\/strong><br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/ebrahim-beyrami-25150558\/\" target=\"_blank\" rel=\"noopener\">Ebrahim Beyrami<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/chandanreddy\/\" target=\"_blank\" rel=\"noopener\">Chandan Karadagur Ananda Reddy<span 
class=\"sr-only\"> (opens in new tab)<\/span><\/a>, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/jamie-pool\/\" target=\"_blank\" rel=\"noopener\">Jamie Pool<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/rosscutler\/\" target=\"_blank\" rel=\"noopener\">Ross Cutler<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/sriramsrinivasan\/\" target=\"_blank\" rel=\"noopener\">Sriram Srinivasan<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/johannes\/\">Johannes Gehrke<\/a><\/p>\n<p>13:30-15:30 | Hall 10\/E | Poster<br \/>\n<strong>Vocal Pitch Extraction in Polyphonic Music Using Convolutional Residual Network<\/strong><br \/>\nMingye Dong, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/jiewubj\/\" target=\"_blank\" rel=\"noopener\">Jie Wu<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/jian-luan-58b5a428\/\" target=\"_blank\" rel=\"noopener\">Jian Luan<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<p>13:30-13:50 | Hall 1 | Oral<br \/>\n<strong>Forward-Backward Decoding for Regularizing End-to-End TTS<\/strong><br \/>\nYibin Zheng, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/xi-wang-502b2029\/\" target=\"_blank\" rel=\"noopener\">Xi Wang<span class=\"sr-only\"> (opens in 
new tab)<\/span><\/a>, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/lei-he-58953073\/\" target=\"_blank\" rel=\"noopener\">Lei He<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/shifeng-pan-32155638\/\" target=\"_blank\" rel=\"noopener\">Shifeng Pan<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/frankkps\/\">Frank Soong<\/a>, Zhengqi Wen, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/dblp.org\/pers\/hd\/t\/Tao:Jianhua\" target=\"_blank\" rel=\"noopener\">Jianhua Tao<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<p>13:50-14:10 | Hall 2 | Oral<br \/>\n<strong>A New GAN-based End-to-End TTS Training Algorithm<\/strong><strong>\u00a0<\/strong><br \/>\nHaohan Guo, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/frankkps\/\">Frank Soong<\/a>, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/lei-he-58953073\/\" target=\"_blank\" rel=\"noopener\">Lei He<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Lei Xie<\/p>\n<p>14:10-14:30 | Hall 2 | Oral<br \/>\n<strong>Robust Sequence-to-Sequence Acoustic Modeling with Stepwise Monotonic Attention for Neural TTS<\/strong><strong><br \/>\n<\/strong>Mutian He, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/yan-deng-41157535\/\" target=\"_blank\" rel=\"noopener\">Yan Deng<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" 
href=\"https:\/\/www.linkedin.com\/in\/lei-he-58953073\/\" target=\"_blank\" rel=\"noopener\">Lei He<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<p>16:00-18:00 | Gallery A | Poster<br \/>\n<strong>Token-Level Ensemble Distillation for Grapheme-to-Phoneme Conversion<\/strong><strong>\u00a0 <\/strong><br \/>\nHao Sun, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/xuta\/\">Xu Tan<\/a>, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/%E4%BF%8A%E4%BC%9F-%E5%B9%B2-9b9b00131\/\" target=\"_blank\" rel=\"noopener\">Jun-Wei Gan<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Hongzhi Liu, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/sheng-zhao-83689129\/\" target=\"_blank\" rel=\"noopener\">Sheng Zhao<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/imtaoqin\/\" target=\"_blank\" rel=\"noopener\">Tao Qin<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/tyliu\/\">Tie-Yan Liu<\/a><\/p>\n<p>16:00-18:00 | Gallery B | Poster<br \/>\n<strong>Exploiting Monolingual Speech Corpora for Code-mixed Speech Recognition<\/strong><br \/>\nKaran Taneja, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/satarupa-guha-3aa52a5b\/\" target=\"_blank\" rel=\"noopener\">Satarupa Guha<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Preethi Jyothi, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/basil-abraham-91346474\/\" target=\"_blank\" rel=\"noopener\">Basil Abraham<span class=\"sr-only\"> (opens in new 
tab)<\/span><\/a><\/p>\n<p>16:40-17:00 | Hall 1 | Oral<br \/>\n<strong>Layer Trajectory BLSTM<\/strong><br \/>\n<strong>Eric Sun<\/strong>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jinyli\/\">Jinyu Li<\/a>, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/yifan-gong-4a06162\/\" target=\"_blank\" rel=\"noopener\">Yifan Gong<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<p>16:00-18:00 | Gallery C | Poster<br \/>\n<strong>Acoustic-to-Phrase Models for Speech Recognition<\/strong><strong>\u00a0 <\/strong><br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/yashesh-yash-gaur-335b1618\/\" target=\"_blank\" rel=\"noopener\">Yashesh Gaur<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jinyli\/\">Jinyu Li<\/a>, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/zhong-meng-39a6224b\/\" target=\"_blank\" rel=\"noopener\">Zhong Meng<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/yifan-gong-4a06162\/\" target=\"_blank\" rel=\"noopener\">Yifan Gong<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<h3>Wednesday, September 18<\/h3>\n<p>11:20-11:40 | Hall 1 | Oral<br \/>\n<strong>Supervised Classifiers for Audio Impairments with Noisy Labels<\/strong><strong>\u00a0<\/strong><br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/chandanreddy\/\" target=\"_blank\" rel=\"noopener\">Chandan Karadagur Ananda Reddy<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, <a 
class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/rosscutler\/\" target=\"_blank\" rel=\"noopener\">Ross Cutler<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/johannes\/\">Johannes Gehrke<\/a><\/p>\n<p>10:00-12:00 | Gallery B | Poster<br \/>\n<strong>Meeting Transcription Using Asynchronous Distant Microphones<\/strong><br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/tayoshio\/\">Takuya Yoshioka<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/didimit\/\">Dimitrios Dimitriadis<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/anstolck\/\">Andreas Stolcke<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/wihintho\/\">William Hinthorn<\/a>, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/zhuo-chen-b679aa3b\/\" target=\"_blank\" rel=\"noopener\">Zhuo Chen<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/nzeng\/\">Michael Zeng<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/xdh\/\">Xuedong Huang<\/a><\/p>\n<p>13:30-15:30 | Gallery B | Poster<br \/>\n<strong>Compression of CTC-Trained Acoustic Models by Dynamic Frame-Wise Distillation or Segment-Wise N-Best Hypotheses Imitation<\/strong><br \/>\nHaisong Ding, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/kaic\/\">Kai Chen<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/qianghuo\/\">Qiang Huo<\/a><\/p>\n<p>13:30-15:30 | Gallery B | Poster<br \/>\n<strong>Latent Dirichlet Allocation based Acoustic Data Selection for Automatic Speech Recognition<\/strong><strong><br \/>\n<\/strong><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" 
href=\"https:\/\/www.linkedin.com\/in\/mortaza-morrie-doulaty-44824021\/\" target=\"_blank\" rel=\"noopener\">Mortaza (Morrie) Doulaty<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Thomas Hain<\/p>\n<p>17:40-18:00 | Hall 1 | Oral<br \/>\n<strong>Self-Teaching Networks<\/strong><strong>\u00a0<\/strong><br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/liang-lu-6b336838\/\" target=\"_blank\" rel=\"noopener\">Liang Lu<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, <strong>Eric Sun<\/strong>, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/yifan-gong-4a06162\/\" target=\"_blank\" rel=\"noopener\">Yifan Gong<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<p>16:00-18:00 | Hall 10\/E | Poster<br \/>\n<strong>Sound Event Detection in Multichannel Audio Using Convolutional Time-Frequency Channel Squeeze and Excitation<\/strong><br \/>\nWei Xia, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/kazukoi\/\">Kazuhito Koishida<\/a><\/p>\n<h3>Thursday, September 19<\/h3>\n<p>13:30-15:30 | Gallery C | Poster<br \/>\n<strong>Exploiting Syntactic Features in a Parsed Tree to Improve End-to-End TTS<\/strong><strong>\u00a0<\/strong><br \/>\nHaohan Guo, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/frankkps\/\">Frank Soong<\/a>, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/lei-he-58953073\/\" target=\"_blank\" rel=\"noopener\">Lei He<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Lei Xie<\/p>\n<p>13:30-15:30 | Hall 12 | Special Session<br \/>\n<strong>Speech Technologies for Code-Switching in Multilingual Communities<\/strong><br \/>\nOrganizers: <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/kalikab\/\">Kalika 
Bali<\/a>, Alan W Black, Julia Hirschberg, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/susitara\/\">Sunayana Sitaram<\/a>, Thamar Solorio<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<p>\t\t\t<div class=\"ms-grid \">\n\t\t\t<div class=\"ms-row\">\n\t\t\t\t\n<article class=\"msr-light-gray-bgc m-col-12-24 l-col-8-24 bg-clip-content white-bgc margin-bottom-sp3 msr-project-card\" data-bi-slot=\"8\">\n\t<div class=\"padding-horizontal padding-vertical-sp2\">\n\t\t<h3 class=\"subtitle\">\n\t\t\t\t\t\t\t<a href=\"https:\/\/careers.microsoft.com\/students\/us\/en\/job\/653143\/Full-Time-Opportunities-for-PhD-Students-or-Recent-Graduates-Cognition-and-Speech-Scientist\" class=\"semibold\">Cognition and Speech Scientist<\/a>\n\t\t\t\t\t<\/h3>\n\n\t\t<div class=\"gray-d1-c\">\n\t\t\t<div class=\"body-alt tight\">\n\t\t\t\t\t\t\t\t<div style=\"height: 5px\"><\/div><p style=\"font-size: 15px\"><strong>Type<\/strong>: Full-time<\/p><p style=\"font-size: 15px\"><strong>Lab\/Location<\/strong>: Redmond, Washington<\/p><p style=\"font-size: 15px\">We are looking for a motivated, self-driven software development engineer\/scientist to join our mission to change the world with TTS technology.<\/p><p>\t\t\t<\/div>\n\t\t<\/div>\n\n\t<\/div>\n<\/article>\n<\/p><p>\n<article class=\"msr-light-gray-bgc m-col-12-24 l-col-8-24 bg-clip-content white-bgc margin-bottom-sp3 msr-project-card\" data-bi-slot=\"9\">\n\t<div class=\"padding-horizontal padding-vertical-sp2\">\n\t\t<h3 class=\"subtitle\">\n\t\t\t\t\t\t\t<a href=\"https:\/\/careers.microsoft.com\/students\/us\/en\/job\/653144\/Internship-Opportunities-for-PhD-Students-Cognition-and-Speech-Scientist\" class=\"semibold\">Cognition and Speech Scientist Intern<\/a>\n\t\t\t\t\t<\/h3>\n\n\t\t<div class=\"gray-d1-c\">\n\t\t\t<div class=\"body-alt tight\">\n\t\t\t\t\t\t\t\t<\/p><div style=\"height: 5px\"><\/div><p style=\"font-size: 15px\"><strong>Type<\/strong>: 
Internship<\/p><p style=\"font-size: 15px\"><strong>Lab\/Location<\/strong>: Redmond, Washington<\/p><p style=\"font-size: 15px\">We are looking for a motivated, self-driven software development engineer\/scientist intern to join our mission to change the world with TTS technology.<\/p><p>\t\t\t<\/div>\n\t\t<\/div>\n\n\t<\/div>\n<\/article>\n<\/p><p>\n<article class=\"msr-light-gray-bgc m-col-12-24 l-col-8-24 bg-clip-content white-bgc margin-bottom-sp3 msr-project-card\" data-bi-slot=\"10\">\n\t<div class=\"padding-horizontal padding-vertical-sp2\">\n\t\t<h3 class=\"subtitle\">\n\t\t\t\t\t\t\t<a href=\"https:\/\/careers.microsoft.com\/us\/en\/job\/676849\/Applied-Scientist\" class=\"semibold\">Applied Scientist<\/a>\n\t\t\t\t\t<\/h3>\n\n\t\t<div class=\"gray-d1-c\">\n\t\t\t<div class=\"body-alt tight\">\n\t\t\t\t\t\t\t\t<\/p><div style=\"height: 5px\"><\/div><p style=\"font-size: 15px\"><strong>Type<\/strong>: Full-time<\/p><p style=\"font-size: 15px\"><strong>Lab\/Location<\/strong>: Redmond, Washington<\/p><p style=\"font-size: 15px\">We are hiring Scientists\/Engineers with outstanding machine learning (ML) and speech recognition (SR) technology development skills to advance Microsoft&#8217;s core speech technology.<\/p><p>\t\t\t<\/div>\n\t\t<\/div>\n\n\t<\/div>\n<\/article>\n<\/p>\t\t\t<\/div>\n\t\t<\/div>\n\t\t<br \/>\n\t\t\t<div class=\"ms-grid \">\n\t\t\t<div class=\"ms-row\">\n\t\t\t\t\n<article class=\"msr-light-gray-bgc m-col-12-24 l-col-8-24 bg-clip-content white-bgc margin-bottom-sp3 msr-project-card\" data-bi-slot=\"11\">\n\t<div class=\"padding-horizontal padding-vertical-sp2\">\n\t\t<h3 class=\"subtitle\">\n\t\t\t\t\t\t\t<a href=\"https:\/\/careers.microsoft.com\/us\/en\/job\/676850\/Sr-Applied-Scientist\" class=\"semibold\">Sr. 
Applied Scientist<\/a>\n\t\t\t\t\t<\/h3>\n\n\t\t<div class=\"gray-d1-c\">\n\t\t\t<div class=\"body-alt tight\">\n\t\t\t\t\t\t\t\t<div style=\"height: 5px\"><\/div><p style=\"font-size: 15px\"><strong>Type<\/strong>: Full-time<\/p><p style=\"font-size: 15px\"><strong>Lab\/Location<\/strong>: Redmond, Washington<\/p><p style=\"font-size: 15px\">The Speech Group develops speech recognition features in Enterprise, Entertainment and Desktop and Mobile products and particularly in the voice platform that powers Microsoft 365 Search and Assistant&#8230;<\/p><p>\t\t\t<\/div>\n\t\t<\/div>\n\n\t<\/div>\n<\/article>\n<\/p><p>\n<article class=\"msr-light-gray-bgc m-col-12-24 l-col-8-24 bg-clip-content white-bgc margin-bottom-sp3 msr-project-card\" data-bi-slot=\"12\">\n\t<div class=\"padding-horizontal padding-vertical-sp2\">\n\t\t<h3 class=\"subtitle\">\n\t\t\t\t\t\t\t<a href=\"https:\/\/careers.microsoft.com\/us\/en\/job\/615317\/Applied-Scientist-II\" class=\"semibold\">Applied Scientist II<\/a>\n\t\t\t\t\t<\/h3>\n\n\t\t<div class=\"gray-d1-c\">\n\t\t\t<div class=\"body-alt tight\">\n\t\t\t\t\t\t\t\t<\/p><div style=\"height: 5px\"><\/div><p style=\"font-size: 15px\"><strong>Type<\/strong>: Full-time<\/p><p style=\"font-size: 15px\"><strong>Lab\/Location<\/strong>: Bellevue, Washington<\/p><p style=\"font-size: 15px\">Are you interested in AI and machine learning technology, especially involving speech and language? Are you an expert in deep learning or willing to learn those advance techniques used in Cloud+AI products&#8230;<\/p><p>\t\t\t<\/div>\n\t\t<\/div>\n\n\t<\/div>\n<\/article>\n<\/p>\t\t\t<\/div>\n\t\t<\/div>\n\t\t<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Interspeech is the world\u2018s largest and most comprehensive conference on the science and technology of spoken language processing. Microsoft joins the conference as a proud gold sponsor. 
Stop by our booth to chat with our experts, see demos of our latest research and find out about career opportunities\u00a0with Microsoft.<\/p>\n","protected":false},"featured_media":606480,"template":"","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","msr_startdate":"2019-09-15","msr_enddate":"2019-09-19","msr_location":"Graz, Austria","msr_expirationdate":"","msr_event_recording_link":"","msr_event_link":"https:\/\/www.interspeech2019.org\/registration\/registration_overview_and_fees\/","msr_event_link_redirect":false,"msr_event_time":"","msr_hide_region":false,"msr_private_event":false,"msr_hide_image_in_river":0,"footnotes":""},"research-area":[13545],"msr-region":[239178],"msr-event-type":[197941],"msr-video-type":[],"msr-locale":[268875],"msr-program-audience":[],"msr-post-option":[],"msr-impact-theme":[],"class_list":["post-605979","msr-event","type-msr-event","status-publish","has-post-thumbnail","hentry","msr-research-area-human-language-technologies","msr-region-europe","msr-event-type-conferences","msr-locale-en_us"],"msr_about":"<!-- wp:msr\/event-details {\"title\":\"Microsoft at Interspeech 2019\",\"backgroundColor\":\"grey\",\"image\":{\"id\":606480,\"url\":\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2019\/08\/Interspeech_Graz_Austria.jpg\",\"alt\":\"\"}} \/-->\n\n<!-- wp:msr\/content-tabs --><!-- wp:msr\/content-tab {\"title\":\"About\"} --><!-- wp:freeform --><p><strong>Venue:<\/strong> <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.interspeech2019.org\/venue_and_travel\/conference-venue\/\" target=\"_blank\" rel=\"noopener noreferrer\">Messecongress Graz<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<p><strong>Website:<\/strong> <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" 
href=\"https:\/\/www.interspeech2019.org\/\" target=\"_blank\" rel=\"noopener noreferrer\">Interspeech 2019<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<p>Interspeech is the world\u2018s largest and most comprehensive conference on the science and technology of spoken language processing. Microsoft joins the conference as a proud gold sponsor. Stop by our booth to chat with our experts, see demos of our latest research and find out about <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/careers.microsoft.com\/us\/en\/c\/research-jobs?rt=professional\" target=\"_blank\" rel=\"noopener\">career opportunities<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>\u00a0with Microsoft.<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<!-- \/wp:freeform --><!-- \/wp:msr\/content-tab --><!-- wp:msr\/content-tab {\"title\":\"Schedule\"} --><!-- wp:freeform --><h3>Monday, September 16<\/h3>\n<p>15:30-15:50 | Hall 1 | Oral<br \/>\n<strong>Speaker Adaptation for Attention-Based End-to-End Speech Recognition<\/strong><br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/zhong-meng-39a6224b\/\" target=\"_blank\" rel=\"noopener\">Zhong Meng<\/a>, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/yashesh-yash-gaur-335b1618\/\" target=\"_blank\" rel=\"noopener\">Yashesh Gaur<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jinyli\/\">Jinyu Li<\/a>, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/yifan-gong-4a06162\/\" target=\"_blank\" rel=\"noopener\">Yifan 
Gong<\/a><\/p>\n<p>14:30-16:30 | Gallery C | Poster<br \/>\n<strong>Zero Shot Intent Classification Using Long-Short Term Memory Networks<\/strong><strong><br \/>\n<\/strong><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/kywillia\/\">Kyle Williams<\/a><\/p>\n<p>14:30 \u2013 16:30 | Hall 4 | Show &amp; Tell<br \/>\n<strong>Speech Based Web Navigation for Movement Impaired Users<\/strong><br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/vasiliy-radostev-063947\/\" target=\"_blank\" rel=\"noopener\">Vasiliy Radostev<\/a>, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/serge-berger-0933bb25\/\" target=\"_blank\" rel=\"noopener\">Serge Berger<\/a>, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/justin-sina-tabrizi-a4a86851\/\" target=\"_blank\" rel=\"noopener\">Justin Tabrizi<\/a>, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/pavel-pasha-kamyshev-6685b227\/\" target=\"_blank\" rel=\"noopener\">Pasha Kamyshev<\/a>, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/hisami-suzuki-23547376\/\" target=\"_blank\" rel=\"noopener\">Hisami Suzuki<\/a><\/p>\n<h3>Tuesday, September 17<\/h3>\n<p>10:00-12:00 | Hall 10\/E | Poster<br \/>\n<strong>A Scalable Noisy Speech Dataset and Online Subjective Test Framework<\/strong><strong>\u00a0<\/strong><br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/ebrahim-beyrami-25150558\/\" target=\"_blank\" rel=\"noopener\">Ebrahim Beyrami<\/a>, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab 
glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/chandanreddy\/\" target=\"_blank\" rel=\"noopener\">Chandan Karadagur Ananda Reddy<\/a>, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/jamie-pool\/\" target=\"_blank\" rel=\"noopener\">Jamie Pool<\/a>, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/rosscutler\/\" target=\"_blank\" rel=\"noopener\">Ross Cutler<\/a>, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/sriramsrinivasan\/\" target=\"_blank\" rel=\"noopener\">Sriram Srinivasan<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/johannes\/\">Johannes Gehrke<\/a><\/p>\n<p>13:30-15:30 | Hall 10\/E | Poster<br \/>\n<strong>Vocal Pitch Extraction in Polyphonic Music Using Convolutional Residual Network<\/strong><br \/>\nMingye Dong, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/jiewubj\/\" target=\"_blank\" rel=\"noopener\">Jie Wu<\/a>, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/jian-luan-58b5a428\/\" target=\"_blank\" rel=\"noopener\">Jian Luan<\/a><\/p>\n<p>13:30-13:50 | Hall 1 | Oral<br \/>\n<strong>Forward-Backward Decoding for Regularizing End-to-End TTS<\/strong><br \/>\nYibin Zheng, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/xi-wang-502b2029\/\" target=\"_blank\" rel=\"noopener\">Xi Wang<\/a>, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/lei-he-58953073\/\" target=\"_blank\" 
rel=\"noopener\">Lei He<\/a>, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/shifeng-pan-32155638\/\" target=\"_blank\" rel=\"noopener\">Shifeng Pan<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/frankkps\/\">Frank Soong<\/a>, Zhengqi Wen, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/dblp.org\/pers\/hd\/t\/Tao:Jianhua\" target=\"_blank\" rel=\"noopener\">Jianhua Tao<\/a><\/p>\n<p>13:50-14:10 | Hall 2 | Oral<br \/>\n<strong>A New GAN-based End-to-End TTS Training Algorithm<\/strong><strong>\u00a0<\/strong><br \/>\nHaohan Guo, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/frankkps\/\">Frank Soong<\/a>, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/lei-he-58953073\/\" target=\"_blank\" rel=\"noopener\">Lei He<\/a>, Lei Xie<\/p>\n<p>14:10-14:30 | Hall 2 | Oral<br \/>\n<strong>Robust Sequence-to-Sequence Acoustic Modeling with Stepwise Monotonic Attention for Neural TTS<\/strong><strong><br \/>\n<\/strong>Mutian He, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/yan-deng-41157535\/\" target=\"_blank\" rel=\"noopener\">Yan Deng<\/a>, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/lei-he-58953073\/\" target=\"_blank\" rel=\"noopener\">Lei He<\/a><\/p>\n<p>16:00-18:00 | Gallery A | Poster<br \/>\n<strong>Token-Level Ensemble Distillation for Grapheme-to-Phoneme Conversion<\/strong><strong>\u00a0 <\/strong><br \/>\nHao Sun, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/xuta\/\">Xu Tan<\/a>, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" 
href=\"https:\/\/www.linkedin.com\/in\/%E4%BF%8A%E4%BC%9F-%E5%B9%B2-9b9b00131\/\" target=\"_blank\" rel=\"noopener\">Jun-Wei Gan<\/a>, Hongzhi Liu, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/sheng-zhao-83689129\/\" target=\"_blank\" rel=\"noopener\">Sheng Zhao<\/a>, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/imtaoqin\/\" target=\"_blank\" rel=\"noopener\">Tao Qin<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/tyliu\/\">Tie-Yan Liu<\/a><\/p>\n<p>16:00-18:00 | Gallery B | Poster<br \/>\n<strong>Exploiting Monolingual Speech Corpora for Code-mixed Speech Recognition<\/strong><br \/>\nKaran Taneja, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/satarupa-guha-3aa52a5b\/\" target=\"_blank\" rel=\"noopener\">Satarupa Guha<\/a>, Preethi Jyothi, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/basil-abraham-91346474\/\" target=\"_blank\" rel=\"noopener\">Basil Abraham<\/a><\/p>\n<p>16:40-17:00 | Hall 1 | Oral<br \/>\n<strong>Layer Trajectory BLSTM<\/strong><br \/>\n<strong>Eric Sun<\/strong>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jinyli\/\">Jinyu Li<\/a>, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/yifan-gong-4a06162\/\" target=\"_blank\" rel=\"noopener\">Yifan Gong<\/a><\/p>\n<p>16:00-18:00 | Gallery C | Poster<br \/>\n<strong>Acoustic-to-Phrase Models for Speech Recognition<\/strong><strong>\u00a0 <\/strong><br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/yashesh-yash-gaur-335b1618\/\" target=\"_blank\" 
rel=\"noopener\">Yashesh Gaur<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jinyli\/\">Jinyu Li<\/a>, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/zhong-meng-39a6224b\/\" target=\"_blank\" rel=\"noopener\">Zhong Meng<\/a>, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/yifan-gong-4a06162\/\" target=\"_blank\" rel=\"noopener\">Yifan Gong<\/a><\/p>\n<h3>Wednesday, September 18<\/h3>\n<p>11:20-11:40 | Hall 1 | Oral<br \/>\n<strong>Supervised Classifiers for Audio Impairments with Noisy Labels<\/strong><strong>\u00a0<\/strong><br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/chandanreddy\/\" target=\"_blank\" rel=\"noopener\">Chandan Karadagur Ananda Reddy<\/a>, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/rosscutler\/\" target=\"_blank\" rel=\"noopener\">Ross Cutler<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/johannes\/\">Johannes Gehrke<\/a><\/p>\n<p>10:00-12:00 | Gallery B | Poster<br \/>\n<strong>Meeting Transcription Using Asynchronous Distant Microphones<\/strong><br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/tayoshio\/\">Takuya Yoshioka<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/didimit\/\">Dimitrios Dimitriadis<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/anstolck\/\">Andreas Stolcke<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/wihintho\/\">William Hinthorn<\/a>, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/zhuo-chen-b679aa3b\/\" target=\"_blank\" rel=\"noopener\">Zhuo Chen<\/a>, <a 
href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/nzeng\/\">Michael Zeng<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/xdh\/\">Xuedong Huang<\/a><\/p>\n<p>13:30-15:30 | Gallery B | Poster<br \/>\n<strong>Compression of CTC-Trained Acoustic Models by Dynamic Frame-Wise Distillation or Segment-Wise N-Best Hypotheses Imitation<\/strong><br \/>\nHaisong Ding, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/kaic\/\">Kai Chen<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/qianghuo\/\">Qiang Huo<\/a><\/p>\n<p>13:30-15:30 | Gallery B | Poster<br \/>\n<strong>Latent Dirichlet Allocation based Acoustic Data Selection for Automatic Speech Recognition<\/strong><strong><br \/>\n<\/strong><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/mortaza-morrie-doulaty-44824021\/\" target=\"_blank\" rel=\"noopener\">Mortaza (Morrie) Doulaty<\/a>, Thomas Hain<\/p>\n<p>17:40-18:00 | Hall 1 | Oral<br \/>\n<strong>Self-Teaching Networks<\/strong><strong>\u00a0<\/strong><br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/liang-lu-6b336838\/\" target=\"_blank\" rel=\"noopener\">Liang Lu<\/a>, <strong>Eric Sun<\/strong>, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/yifan-gong-4a06162\/\" target=\"_blank\" rel=\"noopener\">Yifan Gong<\/a><\/p>\n<p>16:00-18:00 | Hall 10\/E | Poster<br \/>\n<strong>Sound Event Detection in Multichannel Audio Using Convolutional Time-Frequency Channel Squeeze and Excitation<\/strong><br \/>\nWei Xia, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/kazukoi\/\">Kazuhito Koishida<\/a><\/p>\n<h3>Thursday, September 19<\/h3>\n<p>13:30-15:30 | Gallery C | Poster<br \/>\n<strong>Exploiting Syntactic<\/strong><br 
\/>\n<strong>Features in a Parsed Tree to Improve End-to-End TTS<\/strong><strong>\u00a0<\/strong><br \/>\nHaohan Guo, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/frankkps\/\">Frank Soong<\/a>, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/lei-he-58953073\/\" target=\"_blank\" rel=\"noopener\">Lei He<\/a>, Lei Xie<\/p>\n<p>13:30-15:30 | Hall 12 | Special Session<br \/>\n<strong>Speech Technologies for Code-Switching in Multilingual Communities<\/strong><br \/>\nOrganizers: <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/kalikab\/\">Kalika Bali<\/a>, Alan W Black, Julia Hirschberg, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/susitara\/\">Sunayana Sitaram<\/a>, Thamar Solorio<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<!-- \/wp:freeform --><!-- \/wp:msr\/content-tab --><!-- wp:msr\/content-tab {\"title\":\"Career Opportunities\"} --><!-- wp:freeform --><p>\t\t\t<div class=\"ms-grid \">\n\t\t\t<div class=\"ms-row\">\n\t\t\t\t\n<article class=\"msr-light-gray-bgc m-col-12-24 l-col-8-24 bg-clip-content white-bgc margin-bottom-sp3 msr-project-card\" data-bi-slot=\"8\">\n\t<div class=\"padding-horizontal padding-vertical-sp2\">\n\t\t<h3 class=\"subtitle\">\n\t\t\t\t\t\t\t<a href=\"https:\/\/careers.microsoft.com\/students\/us\/en\/job\/653143\/Full-Time-Opportunities-for-PhD-Students-or-Recent-Graduates-Cognition-and-Speech-Scientist\" class=\"semibold\">Cognition and Speech Scientist<\/a>\n\t\t\t\t\t<\/h3>\n\n\t\t<div class=\"gray-d1-c\">\n\t\t\t<div class=\"body-alt tight\">\n\t\t\t\t\t\t\t\t<div style=\"height: 5px\"><\/div><p style=\"font-size: 15px\"><strong>Type<\/strong>: Full-time<\/p><p style=\"font-size: 15px\"><strong>Lab\/Location<\/strong>: Redmond, Washington<\/p><p style=\"font-size: 15px\">We are looking for a motivated, self-driven software development 
engineer\/scientist to join our mission to change the world with TTS technology.<\/p><p>\t\t\t<\/div>\n\t\t<\/div>\n\n\t<\/div>\n<\/article>\n<\/p><p>\n<article class=\"msr-light-gray-bgc m-col-12-24 l-col-8-24 bg-clip-content white-bgc margin-bottom-sp3 msr-project-card\" data-bi-slot=\"9\">\n\t<div class=\"padding-horizontal padding-vertical-sp2\">\n\t\t<h3 class=\"subtitle\">\n\t\t\t\t\t\t\t<a href=\"https:\/\/careers.microsoft.com\/students\/us\/en\/job\/653144\/Internship-Opportunities-for-PhD-Students-Cognition-and-Speech-Scientist\" class=\"semibold\">Cognition and Speech Scientist Intern<\/a>\n\t\t\t\t\t<\/h3>\n\n\t\t<div class=\"gray-d1-c\">\n\t\t\t<div class=\"body-alt tight\">\n\t\t\t\t\t\t\t\t<\/p><div style=\"height: 5px\"><\/div><p style=\"font-size: 15px\"><strong>Type<\/strong>: Internship<\/p><p style=\"font-size: 15px\"><strong>Lab\/Location<\/strong>: Redmond, Washington<\/p><p style=\"font-size: 15px\">We are looking for a motivated, self-driven software development engineer\/scientist intern to join our mission to change the world with TTS technology.<\/p><p>\t\t\t<\/div>\n\t\t<\/div>\n\n\t<\/div>\n<\/article>\n<\/p><p>\n<article class=\"msr-light-gray-bgc m-col-12-24 l-col-8-24 bg-clip-content white-bgc margin-bottom-sp3 msr-project-card\" data-bi-slot=\"10\">\n\t<div class=\"padding-horizontal padding-vertical-sp2\">\n\t\t<h3 class=\"subtitle\">\n\t\t\t\t\t\t\t<a href=\"https:\/\/careers.microsoft.com\/us\/en\/job\/676849\/Applied-Scientist\" class=\"semibold\">Applied Scientist<\/a>\n\t\t\t\t\t<\/h3>\n\n\t\t<div class=\"gray-d1-c\">\n\t\t\t<div class=\"body-alt tight\">\n\t\t\t\t\t\t\t\t<\/p><div style=\"height: 5px\"><\/div><p style=\"font-size: 15px\"><strong>Type<\/strong>: Full-time<\/p><p style=\"font-size: 15px\"><strong>Lab\/Location<\/strong>: Redmond, Washington<\/p><p style=\"font-size: 15px\">We are hiring Scientists\/Engineers with outstanding machine learning (ML) and speech recognition (SR) technology development skills to 
advance Microsoft&#8217;s core speech technology.<\/p><p>\t\t\t<\/div>\n\t\t<\/div>\n\n\t<\/div>\n<\/article>\n<\/p>\t\t\t<\/div>\n\t\t<\/div>\n\t\t<br \/>\n\t\t\t<div class=\"ms-grid \">\n\t\t\t<div class=\"ms-row\">\n\t\t\t\t\n<article class=\"msr-light-gray-bgc m-col-12-24 l-col-8-24 bg-clip-content white-bgc margin-bottom-sp3 msr-project-card\" data-bi-slot=\"11\">\n\t<div class=\"padding-horizontal padding-vertical-sp2\">\n\t\t<h3 class=\"subtitle\">\n\t\t\t\t\t\t\t<a href=\"https:\/\/careers.microsoft.com\/us\/en\/job\/676850\/Sr-Applied-Scientist\" class=\"semibold\">Sr. Applied Scientist<\/a>\n\t\t\t\t\t<\/h3>\n\n\t\t<div class=\"gray-d1-c\">\n\t\t\t<div class=\"body-alt tight\">\n\t\t\t\t\t\t\t\t<div style=\"height: 5px\"><\/div><p style=\"font-size: 15px\"><strong>Type<\/strong>: Full-time<\/p><p style=\"font-size: 15px\"><strong>Lab\/Location<\/strong>: Redmond, Washington<\/p><p style=\"font-size: 15px\">The Speech Group develops speech recognition features in Enterprise, Entertainment and Desktop and Mobile products and particularly in the voice platform that powers Microsoft 365 Search and Assistant&#8230;<\/p><p>\t\t\t<\/div>\n\t\t<\/div>\n\n\t<\/div>\n<\/article>\n<\/p><p>\n<article class=\"msr-light-gray-bgc m-col-12-24 l-col-8-24 bg-clip-content white-bgc margin-bottom-sp3 msr-project-card\" data-bi-slot=\"12\">\n\t<div class=\"padding-horizontal padding-vertical-sp2\">\n\t\t<h3 class=\"subtitle\">\n\t\t\t\t\t\t\t<a href=\"https:\/\/careers.microsoft.com\/us\/en\/job\/615317\/Applied-Scientist-II\" class=\"semibold\">Applied Scientist II<\/a>\n\t\t\t\t\t<\/h3>\n\n\t\t<div class=\"gray-d1-c\">\n\t\t\t<div class=\"body-alt tight\">\n\t\t\t\t\t\t\t\t<\/p><div style=\"height: 5px\"><\/div><p style=\"font-size: 15px\"><strong>Type<\/strong>: Full-time<\/p><p style=\"font-size: 15px\"><strong>Lab\/Location<\/strong>: Bellevue, Washington<\/p><p style=\"font-size: 15px\">Are you interested in AI and machine learning technology, especially involving 
speech and language? Are you an expert in deep learning or willing to learn those advanced techniques used in Cloud+AI products&#8230;<\/p><p>\t\t\t<\/div>\n\t\t<\/div>\n\n\t<\/div>\n<\/article>\n<\/p>\t\t\t<\/div>\n\t\t<\/div>\n\t\t<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<!-- \/wp:freeform --><!-- \/wp:msr\/content-tab --><!-- \/wp:msr\/content-tabs -->","tab-content":[{"id":0,"name":"About","content":"Interspeech is the world\u2019s largest and most comprehensive conference on the science and technology of spoken language processing. Microsoft joins the conference as a proud gold sponsor. Stop by our booth to chat with our experts, see demos of our latest research and find out about <a href=\"https:\/\/careers.microsoft.com\/us\/en\/c\/research-jobs?rt=professional\" target=\"_blank\" rel=\"noopener\">career opportunities<\/a>\u00a0with Microsoft."},{"id":1,"name":"Schedule","content":"<h3>Monday, September 16<\/h3>\r\n15:30-15:50 | Hall 1 | Oral\r\n<strong>Speaker Adaptation for Attention-Based End-to-End Speech Recognition<\/strong>\r\n<a href=\"https:\/\/www.linkedin.com\/in\/zhong-meng-39a6224b\/\" target=\"_blank\" rel=\"noopener\">Zhong Meng<\/a>, <a href=\"https:\/\/www.linkedin.com\/in\/yashesh-yash-gaur-335b1618\/\" target=\"_blank\" rel=\"noopener\">Yashesh Gaur<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jinyli\/\">Jinyu Li<\/a>, <a href=\"https:\/\/www.linkedin.com\/in\/yifan-gong-4a06162\/\" target=\"_blank\" rel=\"noopener\">Yifan Gong<\/a>\r\n\r\n14:30-16:30 | Gallery C | Poster\r\n<strong>Zero Shot Intent Classification Using Long-Short Term Memory Networks<\/strong><strong>\r\n<\/strong><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/kywillia\/\">Kyle Williams<\/a>\r\n\r\n14:30-16:30 | Hall 4 | Show &amp; Tell\r\n<strong>Speech Based Web Navigation for Movement Impaired Users<\/strong>\r\n<a 
href=\"https:\/\/www.linkedin.com\/in\/vasiliy-radostev-063947\/\" target=\"_blank\" rel=\"noopener\">Vasiliy Radostev<\/a>, <a href=\"https:\/\/www.linkedin.com\/in\/serge-berger-0933bb25\/\" target=\"_blank\" rel=\"noopener\">Serge Berger<\/a>, <a href=\"https:\/\/www.linkedin.com\/in\/justin-sina-tabrizi-a4a86851\/\" target=\"_blank\" rel=\"noopener\">Justin Tabrizi<\/a>, <a href=\"https:\/\/www.linkedin.com\/in\/pavel-pasha-kamyshev-6685b227\/\" target=\"_blank\" rel=\"noopener\">Pasha Kamyshev<\/a>, <a href=\"https:\/\/www.linkedin.com\/in\/hisami-suzuki-23547376\/\" target=\"_blank\" rel=\"noopener\">Hisami Suzuki<\/a>\r\n<h3>Tuesday, September 17<\/h3>\r\n10:00-12:00 | Hall 10\/E | Poster\r\n<strong>A Scalable Noisy Speech Dataset and Online Subjective Test Framework<\/strong><strong>\u00a0<\/strong>\r\n<a href=\"https:\/\/www.linkedin.com\/in\/ebrahim-beyrami-25150558\/\" target=\"_blank\" rel=\"noopener\">Ebrahim Beyrami<\/a>, <a href=\"https:\/\/www.linkedin.com\/in\/chandanreddy\/\" target=\"_blank\" rel=\"noopener\">Chandan Karadagur Ananda Reddy<\/a>, <a href=\"https:\/\/www.linkedin.com\/in\/jamie-pool\/\" target=\"_blank\" rel=\"noopener\">Jamie Pool<\/a>, <a href=\"https:\/\/www.linkedin.com\/in\/rosscutler\/\" target=\"_blank\" rel=\"noopener\">Ross Cutler<\/a>, <a href=\"https:\/\/www.linkedin.com\/in\/sriramsrinivasan\/\" target=\"_blank\" rel=\"noopener\">Sriram Srinivasan<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/johannes\/\">Johannes Gehrke <\/a>\r\n\r\n13:30-15:30 | Hall 10\/E | Poster\r\n<strong>Speech Signal Characterization 3\/Vocal Pitch Extraction in Polyphonic Music using Convolutional Residual Network<\/strong>\r\nMingye Dong<em>, <\/em><a href=\"https:\/\/www.linkedin.com\/in\/jiewubj\/\" target=\"_blank\" rel=\"noopener\">Jie Wu<\/a>, <a href=\"https:\/\/www.linkedin.com\/in\/jian-luan-58b5a428\/\" target=\"_blank\" rel=\"noopener\">Jian Luan<\/a>\r\n\r\n13:30-13:50 | Hall 1 | 
Oral\r\n<strong>Forward-Backward Decoding for Regularizing End-to-End TTS<\/strong>\r\nYibin Zheng, <a href=\"https:\/\/www.linkedin.com\/in\/xi-wang-502b2029\/\" target=\"_blank\" rel=\"noopener\">Xi Wang<\/a>, <a href=\"https:\/\/www.linkedin.com\/in\/lei-he-58953073\/\" target=\"_blank\" rel=\"noopener\">Lei He<\/a>, <a href=\"https:\/\/www.linkedin.com\/in\/shifeng-pan-32155638\/\" target=\"_blank\" rel=\"noopener\">Shifeng Pan<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/frankkps\/\">Frank Soong<\/a>, Zhengqi Wen, <a href=\"https:\/\/dblp.org\/pers\/hd\/t\/Tao:Jianhua\" target=\"_blank\" rel=\"noopener\">Jianhua Tao<\/a>\r\n\r\n13:50-14:10 | Hall 2 | Oral\r\n<strong>A New GAN-based End-to-End TTS Training Algorithm<\/strong><strong>\u00a0<\/strong>\r\nHaohan Guo, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/frankkps\/\">Frank Soong<\/a>, <a href=\"https:\/\/www.linkedin.com\/in\/lei-he-58953073\/\" target=\"_blank\" rel=\"noopener\">Lei He<\/a>, Lei Xie\r\n\r\n14:10-14:30 | Hall 2 | Oral\r\n<strong>Robust Sequence-to-Sequence Acoustic Modeling with Stepwise Monotonic Attention for Neural TTS<\/strong><strong>\r\n<\/strong>Mutian He, <a href=\"https:\/\/www.linkedin.com\/in\/yan-deng-41157535\/\" target=\"_blank\" rel=\"noopener\">Yan Deng<\/a>, <a href=\"https:\/\/www.linkedin.com\/in\/lei-he-58953073\/\" target=\"_blank\" rel=\"noopener\">Lei He<\/a>\r\n\r\n16:00-18:00 | Gallery A | Poster\r\n<strong>Token-Level Ensemble Distillation for Grapheme-to-Phoneme Conversion<\/strong><strong>\u00a0 <\/strong>\r\nHao Sun, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/xuta\/\">Xu Tan<\/a>, <a href=\"https:\/\/www.linkedin.com\/in\/%E4%BF%8A%E4%BC%9F-%E5%B9%B2-9b9b00131\/\" target=\"_blank\" rel=\"noopener\">Jun-Wei Gan<\/a>, Hongzhi Liu, <a href=\"https:\/\/www.linkedin.com\/in\/sheng-zhao-83689129\/\" target=\"_blank\" rel=\"noopener\">Sheng Zhao<\/a>, <a href=\"https:\/\/www.linkedin.com\/in\/imtaoqin\/\" 
target=\"_blank\" rel=\"noopener\">Tao Qin<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/tyliu\/\">Tie-Yan Liu<\/a>\r\n\r\n16:00-18:00 | Gallery B | Poster\r\n<strong>Exploiting Monolingual Speech Corpora for Code-mixed Speech Recognition<\/strong>\r\nKaran Taneja, <a href=\"https:\/\/www.linkedin.com\/in\/satarupa-guha-3aa52a5b\/\" target=\"_blank\" rel=\"noopener\">Satarupa Guha<\/a>, Preethi Jyothi, <a href=\"https:\/\/www.linkedin.com\/in\/basil-abraham-91346474\/\" target=\"_blank\" rel=\"noopener\">Basil Abraham<\/a>\r\n\r\n16:40-17:00 | Hall 1 | Oral\r\n<strong>Layer Trajectory BLSTM<\/strong>\r\n<strong>Eric Sun<\/strong>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jinyli\/\">Jinyu Li<\/a>, <a href=\"https:\/\/www.linkedin.com\/in\/yifan-gong-4a06162\/\" target=\"_blank\" rel=\"noopener\">Yifan Gong<\/a>\r\n\r\n16:00-18:00 | Gallery C | Poster\r\n<strong>Acoustic-to-Phrase Models for Speech Recognition<\/strong><strong>\u00a0 <\/strong>\r\n<a href=\"https:\/\/www.linkedin.com\/in\/yashesh-yash-gaur-335b1618\/\" target=\"_blank\" rel=\"noopener\">Yashesh Gaur<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jinyli\/\">Jinyu Li<\/a>, <a href=\"https:\/\/www.linkedin.com\/in\/zhong-meng-39a6224b\/\" target=\"_blank\" rel=\"noopener\">Zhong Meng<\/a>, <a href=\"https:\/\/www.linkedin.com\/in\/yifan-gong-4a06162\/\" target=\"_blank\" rel=\"noopener\">Yifan Gong<\/a>\r\n<h3>Wednesday, September 18<\/h3>\r\n11:20-11:40 | Hall 1 | Oral\r\n<strong>Supervised Classifiers for Audio Impairments with Noisy Labels<\/strong><strong>\u00a0<\/strong>\r\n<a href=\"https:\/\/www.linkedin.com\/in\/chandanreddy\/\" target=\"_blank\" rel=\"noopener\">Chandan Karadagur Ananda Reddy<\/a>, <a href=\"https:\/\/www.linkedin.com\/in\/rosscutler\/\" target=\"_blank\" rel=\"noopener\">Ross Cutler<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/johannes\/\">Johannes Gehrke<\/a>\r\n\r\n10:00-12:00 | 
Gallery B | Poster\r\n<strong>Meeting Transcription Using Asynchronous Distant Microphones<\/strong>\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/tayoshio\/\">Takuya Yoshioka<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/didimit\/\">Dimitrios Dimitriadis<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/anstolck\/\">Andreas Stolcke<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/wihintho\/\">William Hinthorn<\/a>, <a href=\"https:\/\/www.linkedin.com\/in\/zhuo-chen-b679aa3b\/\" target=\"_blank\" rel=\"noopener\">Zhuo Chen<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/nzeng\/\">Michael Zeng<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/xdh\/\">Xuedong Huang<\/a>\r\n\r\n13:30-15:30 | Gallery B | Poster\r\n<strong>Compression of CTC-Trained Acoustic Models by Dynamic Frame-Wise Distillation or Segment-Wise N-Best Hypotheses Imitation<\/strong>\r\nHaisong Ding, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/kaic\/\">Kai Chen<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/qianghuo\/\">Qiang Huo<\/a>\r\n\r\n13:30-15:30 | Gallery B | Poster\r\n<strong>Latent Dirichlet Allocation based Acoustic Data Selection for Automatic Speech Recognition<\/strong><strong>\r\n<\/strong><a href=\"https:\/\/www.linkedin.com\/in\/mortaza-morrie-doulaty-44824021\/\" target=\"_blank\" rel=\"noopener\">Mortaza (Morrie) Doulaty<\/a>, Thomas Hain\r\n\r\n17:40-18:00 | Hall 1 | Oral\r\n<strong>Self-Teaching Networks<\/strong><strong>\u00a0<\/strong>\r\n<a href=\"https:\/\/www.linkedin.com\/in\/liang-lu-6b336838\/\" target=\"_blank\" rel=\"noopener\">Liang Lu<\/a>, <strong>Eric Sun<\/strong>, <a href=\"https:\/\/www.linkedin.com\/in\/yifan-gong-4a06162\/\" target=\"_blank\" rel=\"noopener\">Yifan Gong<\/a>\r\n\r\n16:00-18:00 | Hall 10\/E | Poster\r\n<strong>Sound Event Detection in Multichannel Audio Using Convolutional Time-Frequency Channel 
Squeeze and Excitation<\/strong>\r\nWei Xia, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/kazukoi\/\">Kazuhito Koishida<\/a>\r\n<h3>Thursday, September 19<\/h3>\r\n13:30-15:30 | Gallery C | Poster\r\n<strong>Exploiting Syntactic Features in a Parsed Tree to Improve End-to-End TTS<\/strong><strong>\u00a0<\/strong>\r\nHaohan Guo, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/frankkps\/\">Frank Soong<\/a>, <a href=\"https:\/\/www.linkedin.com\/in\/lei-he-58953073\/\" target=\"_blank\" rel=\"noopener\">Lei He<\/a>, Lei Xie\r\n\r\n13:30-15:30 | Hall 12 | Special Session\r\n<strong>Speech Technologies for Code-Switching in Multilingual Communities<\/strong>\r\nOrganizers: <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/kalikab\/\">Kalika Bali<\/a>, Alan W Black, Julia Hirschberg, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/susitara\/\">Sunayana Sitaram<\/a>, Thamar Solorio"},{"id":2,"name":"Career Opportunities","content":"[row]\r\n[card title=\"Cognition and Speech Scientist\" url=\"https:\/\/careers.microsoft.com\/students\/us\/en\/job\/653143\/Full-Time-Opportunities-for-PhD-Students-or-Recent-Graduates-Cognition-and-Speech-Scientist\" ]\r\n<div style=\"height: 5px\"><\/div>\r\n<p style=\"font-size: 15px\"><strong>Type<\/strong>: Full-time<\/p>\r\n<p style=\"font-size: 15px\"><strong>Lab\/Location<\/strong>: Redmond, Washington<\/p>\r\n<p style=\"font-size: 15px\">We are looking for a motivated, self-driven software development engineer\/scientist to join our mission to change the world with TTS technology.<\/p>\r\n[\/card]\r\n\r\n[card title=\"Cognition and Speech Scientist Intern\" url=\"https:\/\/careers.microsoft.com\/students\/us\/en\/job\/653144\/Internship-Opportunities-for-PhD-Students-Cognition-and-Speech-Scientist\" ]\r\n<div style=\"height: 5px\"><\/div>\r\n<p style=\"font-size: 15px\"><strong>Type<\/strong>: Internship<\/p>\r\n<p style=\"font-size: 
15px\"><strong>Lab\/Location<\/strong>: Redmond, Washington<\/p>\r\n<p style=\"font-size: 15px\">We are looking for a motivated, self-driven software development engineer\/scientist intern to join our mission to change the world with TTS technology.<\/p>\r\n[\/card]\r\n\r\n[card title=\"Applied Scientist\" url=\"https:\/\/careers.microsoft.com\/us\/en\/job\/676849\/Applied-Scientist\" ]\r\n<div style=\"height: 5px\"><\/div>\r\n<p style=\"font-size: 15px\"><strong>Type<\/strong>: Full-time<\/p>\r\n<p style=\"font-size: 15px\"><strong>Lab\/Location<\/strong>: Redmond, Washington<\/p>\r\n<p style=\"font-size: 15px\">We are hiring Scientists\/Engineers with outstanding machine learning (ML) and speech recognition (SR) technology development skills to advance Microsoft's core speech technology.<\/p>\r\n[\/card]\r\n[\/row]\r\n[row]\r\n[card title=\"Sr. Applied Scientist\" url=\"https:\/\/careers.microsoft.com\/us\/en\/job\/676850\/Sr-Applied-Scientist\" ]\r\n<div style=\"height: 5px\"><\/div>\r\n<p style=\"font-size: 15px\"><strong>Type<\/strong>: Full-time<\/p>\r\n<p style=\"font-size: 15px\"><strong>Lab\/Location<\/strong>: Redmond, Washington<\/p>\r\n<p style=\"font-size: 15px\">The Speech Group develops speech recognition features in Enterprise, Entertainment and Desktop and Mobile products and particularly in the voice platform that powers Microsoft 365 Search and Assistant...<\/p>\r\n[\/card]\r\n\r\n[card title=\"Applied Scientist II\" url=\"https:\/\/careers.microsoft.com\/us\/en\/job\/615317\/Applied-Scientist-II\" ]\r\n<div style=\"height: 5px\"><\/div>\r\n<p style=\"font-size: 15px\"><strong>Type<\/strong>: Full-time<\/p>\r\n<p style=\"font-size: 15px\"><strong>Lab\/Location<\/strong>: Bellevue, Washington<\/p>\r\n<p style=\"font-size: 15px\">Are you interested in AI and machine learning technology, especially involving speech and language? 
Are you an expert in deep learning or willing to learn those advanced techniques used in Cloud+AI products...<\/p>\r\n[\/card]\r\n[\/row]"}],"msr_startdate":"2019-09-15","msr_enddate":"2019-09-19","msr_event_time":"","msr_location":"Graz, Austria","msr_event_link":"https:\/\/www.interspeech2019.org\/registration\/registration_overview_and_fees\/","msr_event_recording_link":"","msr_startdate_formatted":"September 15, 2019","msr_register_text":"Watch now","msr_cta_link":"https:\/\/www.interspeech2019.org\/registration\/registration_overview_and_fees\/","msr_cta_text":"Watch now","msr_cta_bi_name":"Event Register","featured_image_thumbnail":"<img width=\"960\" height=\"540\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2019\/08\/Interspeech_Graz_Austria-960x540.jpg\" class=\"img-object-cover\" alt=\"a view of a city\" decoding=\"async\" loading=\"lazy\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2019\/08\/Interspeech_Graz_Austria-960x540.jpg 960w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2019\/08\/Interspeech_Graz_Austria-1066x600.jpg 1066w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2019\/08\/Interspeech_Graz_Austria-655x368.jpg 655w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2019\/08\/Interspeech_Graz_Austria-343x193.jpg 343w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2019\/08\/Interspeech_Graz_Austria-640x360.jpg 640w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2019\/08\/Interspeech_Graz_Austria-1280x720.jpg 1280w\" sizes=\"auto, (max-width: 960px) 100vw, 960px\" \/>","event_excerpt":"Interspeech is the world\u2019s largest and most comprehensive conference on the science and technology of spoken language processing. Microsoft joins the conference as a proud gold sponsor. 
Stop by our booth to chat with our experts, see demos of our latest research and find out about career opportunities\u00a0with Microsoft.","msr_research_lab":[],"related-researchers":[],"msr_impact_theme":[],"related-academic-programs":[],"related-groups":[],"related-projects":[],"related-opportunities":[],"related-publications":[],"related-videos":[],"related-posts":[606492,607386],"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/605979","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/msr-event"}],"version-history":[{"count":3,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/605979\/revisions"}],"predecessor-version":[{"id":1147026,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/605979\/revisions\/1147026"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media\/606480"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=605979"}],"wp:term":[{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=605979"},{"taxonomy":"msr-region","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-region?post=605979"},{"taxonomy":"msr-event-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event-type?post=605979"},{"taxonomy":"msr-video-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-video-type?post=605979"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=605979"},{"taxonomy":"msr-p
rogram-audience","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-program-audience?post=605979"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=605979"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=605979"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}