Namaskara! I’m an Applied Scientist at Microsoft Research India in Bangalore, where I work on speech processing for multilingual communities. The current focus of my work is building deep learning-based acoustic and language models that can handle code-switching without requiring large amounts of code-switched training data. At MSRI, I am part of Project Mélange, where we study various aspects of code-switching and mixing, including how and why multilinguals code-switch.
Last year, we organized the inaugural Special Session on Speech Technologies for Code-switching at Interspeech 2017, and we will be organizing the second edition at Interspeech 2018! We also have an exciting new challenge and special session on building ASR systems for low-resource Indian languages. Data and recipes are available for the challenge, so please consider participating if you are interested.
I finished my PhD in December 2016 at the Language Technologies Institute, Carnegie Mellon University. I worked on text-to-speech systems with my advisor Alan W Black, and my thesis was on pronunciation modeling for low-resource languages. In what seems like a past life, from 2010 to 2012, I was a Master’s student at CMU working with Jack Mostow on children’s oral reading prosody. I also interned with Microsoft Research India in the summer of 2012, where we built a low-vocabulary ASR system for farmers in rural central India.
My research goal is to make all the content in the world available to all the people in the world, regardless of the language they speak, their level of education, age, gender, or special needs. So far, my main expertise has been in multilingual systems, particularly in dealing with languages that have very few linguistic resources.