
Y. C. Ju

Senior Research Software Development Engineer (RSDE)

About

Yun-Cheng (Y.C.) Ju joined Microsoft in 1994. He received a B.S. in Electrical Engineering from National Taiwan University in 1984 and a master’s and Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign in 1990 and 1992, respectively.  Prior to joining Microsoft, he worked at Bell Labs for two years.
Research interests

  • Statistical Model Learning with Minimum Labeled Data: robust model training, automatic data acquisition, semi-supervised learning.
  • Information Retrieval: search of structured data.
  • Spoken Language Understanding: robust understanding, parsing, semantic modeling, call routing, voice search.
  • Spoken Language Systems: rapid prototyping of speech understanding systems, language learning tools for semantic grammar development.
  • Speech Recognition: language modeling, language model adaptation.
  • Statistical Machine Translation.

Projects

Dialog and Conversational Systems Research

Established: March 14, 2014

Conversational systems interact with people through language to assist, enable, or entertain. Research at Microsoft spans dialogs that use language exclusively or in conjunction with additional modalities such as gesture; dialogs where language is spoken or written; and a variety of settings, such as conversational systems in apps or devices and situated interactions in the real world.

Spoken Language Understanding

Established: May 1, 2013

Spoken language understanding (SLU) is an emerging field at the intersection of speech processing and natural language processing. The term has largely been used for targeted understanding of human speech directed at machines. This project covers our research on SLU tasks such as domain detection, intent determination, and slot filling, using data-driven methods. Sub-projects include Deeper Understanding: moving beyond shallow targeted understanding towards building domain-independent SLU models; and Scaling SLU: quickly bootstrapping SLU…
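As a rough illustration of the data-driven flavor of these tasks, the sketch below trains a toy intent classifier on a handful of utterances. The utterances, intent labels, and the choice of a bag-of-words logistic-regression model are hypothetical stand-ins, not the project's actual data or models.

```python
# Minimal intent-determination sketch (one of the SLU tasks above).
# Everything here is illustrative: a tiny hand-written training set,
# bag-of-words features, and an off-the-shelf classifier.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_utterances = [
    "play some jazz for me",
    "put on my workout playlist",
    "what's the weather in seattle",
    "will it rain tomorrow",
    "set an alarm for 7 am",
    "wake me up at six thirty",
]
train_intents = [
    "play_music", "play_music",
    "get_weather", "get_weather",
    "set_alarm", "set_alarm",
]

model = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),   # unigram and bigram features
    LogisticRegression(max_iter=1000),
)
model.fit(train_utterances, train_intents)

# An unseen utterance is mapped to the most likely intent label.
print(model.predict(["is it going to rain in boston"]))
```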

Understand User’s Intent from Speech and Text

Established: December 17, 2008

Understanding what users want to do or need to obtain is critical in human-computer interaction. When a natural user interface such as speech or natural language is used, as in a spoken dialogue system or with an internet search engine, language understanding becomes an important issue. Intent understanding is about identifying the action a user wants a computer to take, or the information he or she would like to obtain, conveyed in a spoken utterance or…

Language Modeling for Speech Recognition

Established: January 29, 2004

Did I just say "It's fun to recognize speech" or "It's fun to wreck a nice beach"? It's hard to tell because they sound about the same. Of course, it's a lot more likely that I would say "recognize speech" than "wreck a nice beach." Language models help a speech recognizer figure out how likely a word sequence is, independent of the acoustics. This lets the recognizer make the right guess when two different sentences…
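To give a feel for the idea, the toy bigram model below, trained on a few made-up sentences (an illustrative corpus, not real training data), assigns a higher probability to "it's fun to recognize speech" than to the acoustically confusable "it's fun to wreck a nice beach".

```python
from collections import Counter
import math

# Toy corpus standing in for the large text collections a real language
# model is trained on; these sentences are illustrative only.
corpus = [
    "it's fun to recognize speech",
    "we recognize speech every day",
    "to recognize speech you need a language model",
    "speech recognition is fun",
]

# Unigram (history) and bigram counts with sentence-boundary markers.
unigrams, bigrams = Counter(), Counter()
for sentence in corpus:
    tokens = ["<s>"] + sentence.split() + ["</s>"]
    unigrams.update(tokens[:-1])              # histories for the denominators
    bigrams.update(zip(tokens, tokens[1:]))

vocab_size = len(set(unigrams) | {"</s>"})

def log_prob(sentence):
    """Add-one-smoothed bigram log probability of a word sequence."""
    tokens = ["<s>"] + sentence.split() + ["</s>"]
    return sum(
        math.log((bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab_size))
        for prev, word in zip(tokens, tokens[1:])
    )

# The confusable pair from the blurb: the model prefers the word sequence
# it has actually seen evidence for.
print(log_prob("it's fun to recognize speech"))    # higher (less negative)
print(log_prob("it's fun to wreck a nice beach"))  # lower
```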

Personalized Language Model for improved accuracy

Established: January 29, 2004

Traditionally, speech recognition systems are built with models that average over many different users. The resulting speaker-independent model works reasonably well for a large percentage of users, but accuracy can be improved if the models are personalized to the given user. We have built a service that constantly looks at the user's sent email to personalize the language model, and we have observed a 30% reduction in error rate for…
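One common way to combine personal data with a generic model is linear interpolation. The sketch below shows the idea with unigram models; the training strings, the sent-mail examples, and the interpolation weight are illustrative assumptions, not the deployed service's configuration.

```python
from collections import Counter

def unigram_model(texts):
    """Return a function giving the maximum-likelihood unigram probability."""
    counts = Counter(word for text in texts for word in text.lower().split())
    total = sum(counts.values())
    return lambda word: counts[word] / total if total else 0.0

# Background model: generic text shared across all users (toy data).
background = unigram_model([
    "the meeting is at noon",
    "please review the attached document",
])

# Personal model: estimated from this user's sent mail (hypothetical examples).
personal = unigram_model([
    "ship the contoso build tonight",
    "contoso demo slides for tomorrow",
])

def interpolated_prob(word, lam=0.3):
    """P(word) = lam * P_personal(word) + (1 - lam) * P_background(word)."""
    return lam * personal(word) + (1 - lam) * background(word)

# Words from the user's own vocabulary receive probability mass they would
# not get from the background model alone.
print(interpolated_prob("contoso"))
print(interpolated_prob("meeting"))
```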

Multimodal Conversational User Interface

Established: January 29, 2004

Researchers in the Speech Technology group at Microsoft are working to allow the computer to travel through our living spaces as a handy electronic HAL pal that answers questions, arranges our calendars, and sends messages to our friends and family. Most of us use computers to create text, understand numbers, view images, and send messages. There's only one problem with this marvelous machine: our computer lives on a desktop, and though we command it with…

Automatic Grammar Induction

Established: February 19, 2002

Automatic learning of speech recognition grammars from example sentences to ease the development of spoken language systems. Researcher Ye-Yi Wang wants to have more time for vacation, so he is teaching his computer to do some work for him. Wang has been working on Spoken Language Understanding for the MiPad project since he joined Microsoft Research. He has developed a robust parser and the understanding grammars for several projects. "Grammar development is painful…
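To make the idea of learning grammars from example sentences concrete, the toy sketch below collapses known slot values (here, city names) into a nonterminal so the induced rule also covers unseen combinations. The sentences, slot list, and rule format are hypothetical illustrations, not the project's actual algorithm.

```python
# Toy grammar induction: example sentences sharing a structure are
# generalized by replacing known slot values with a nonterminal.
examples = [
    "show me flights from seattle to boston",
    "show me flights from chicago to denver",
]
cities = {"seattle", "boston", "chicago", "denver"}

# Replace city names with the <city> nonterminal to form rule templates;
# both example sentences collapse to a single template.
templates = {
    " ".join("<city>" if word in cities else word for word in sentence.split())
    for sentence in examples
}

# Print the induced grammar rules.
for template in sorted(templates):
    print("<query> ->", template)
for city in sorted(cities):
    print("<city> ->", city)
```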
