Sequence-to-sequence deep learning has recently emerged as a new paradigm in supervised learning for spoken language understanding. However, most previous studies explored this framework by building single-domain models for each task, such as slot filling or domain classification, comparing deep learning based approaches with conventional ones like conditional random fields. This project focuses on a holistic multi-domain, multi-task (i.e., slot filling, domain detection, and intent detection) modeling approach to estimate complete semantic frames…
Partner Research Manager in Business AI at Microsoft AI & Research. From 2014 to 2017, I was Partner Research Manager at the Deep Learning Technology Center (DLTC) at Microsoft Research, Redmond. I lead the development of AI solutions for predictive sales and marketing. I also work on deep learning for text and image processing (see our IJCAI 2016 tutorial) and lead the development of AI systems for dialogue, machine reading comprehension (MRC), and question answering (QA).
We are hiring Researchers with strengths in ML and NLP, and Software Engineers with rich product experience.
DSSM: We have developed a series of deep semantic similarity models (DSSM, a.k.a. Sent2Vec), which have been used for many text and image processing tasks, including web search [Huang et al. 2013, Shen et al. 2014], recommendation [Gao et al. 2014a], machine translation [Gao et al. 2014b], and QA [Yih et al. 2015].
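As a rough illustration of the core DSSM idea, the sketch below embeds two texts into a shared low-dimensional semantic space and scores them by cosine similarity. The toy vocabulary, random projection, and function names are ours, not the released models: in an actual DSSM the projection is a deep network trained so that related texts land close together.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a learned encoder: project a bag-of-words vector
# into a low-dimensional semantic space. The weights here are random;
# in DSSM they are trained from clickthrough or caption data.
VOCAB = ["deep", "learning", "web", "search", "image", "caption"]
W = rng.standard_normal((len(VOCAB), 3))

def embed(text):
    bow = np.array([text.split().count(w) for w in VOCAB], dtype=float)
    v = bow @ W
    return v / np.linalg.norm(v)

def similarity(a, b):
    # Cosine similarity between the two (unit-normalized) embeddings.
    return float(embed(a) @ embed(b))

print(similarity("deep learning web search", "web search"))
```

Ranking then reduces to scoring a query against candidate documents with `similarity` and sorting by the result.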
Dialogue: We have developed neural network models for social bots trained on Twitter data [project site] and task-completion bots [project site] trained via reinforcement learning using a user simulator.
From 2006 to 2014, I was a Principal Researcher in the Natural Language Processing Group at Microsoft Research, Redmond. I worked on web search, query understanding and reformulation, ads prediction, and statistical machine translation.
From 2005 to 2006, I was a research lead in the Natural Interactive Services Division at Microsoft. I worked on Project X, an effort to develop a natural user interface for Windows.
From 1999 to 2005, I was a Research Lead in the Natural Language Computing Group at Microsoft Research Asia. Together with my colleagues, I developed the first Chinese speech recognition system released with Microsoft Office, the Chinese/Japanese Input Method Editors (IMEs), which were the leading products in the market, and the natural language platform for Windows Vista.
Currently, I live with my family in Woodinville, WA.
Established: June 29, 2016
MSR Image Recognition Challenge (IRC) @ACM Multimedia 2016

Important Dates/Updates:
- New! We are hosting new challenges at ICCV 2017. Visit MsCeleb.org for more details.
- Participant information is disclosed in the "Team Information" section below.
- 6/21/2016: Evaluation results announced in the "Evaluation Result" section below.
- 6/17/2016: Evaluation finished. 14 teams finished the grand challenge!
- 6/13/2016: Evaluation started.
- 6/13/2016: Dry run finished; 14 out of 19 teams passed. See details in "Update Details" below.
- 6/10/2016: Dry run update 3: 8 teams…
Established: April 9, 2015
We introduce a novel approach for automatically generating image descriptions. Visual detectors, language models, and deep multimodal similarity models are learned directly from a dataset of image captions. Our system is state-of-the-art on the official Microsoft COCO benchmark, producing a BLEU-4 score of 29.1%. Human judges consider the captions to be as good as or better than humans 34% of the time.
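For readers unfamiliar with the metric cited above, the sketch below computes a simplified, sentence-level BLEU-4 (geometric mean of modified 1- to 4-gram precisions with a brevity penalty). The official COCO evaluation is corpus-level, uses multiple references per image, and applies its own tokenization, so this is illustrative only.

```python
import math
from collections import Counter

def bleu4(candidate, reference):
    """Simplified sentence-level BLEU-4 against a single reference."""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, 5):
        c_ngrams = Counter(zip(*[cand[i:] for i in range(n)]))
        r_ngrams = Counter(zip(*[ref[i:] for i in range(n)]))
        # Modified n-gram precision: clip candidate counts by reference counts.
        overlap = sum(min(c, r_ngrams[g]) for g, c in c_ngrams.items())
        total = max(sum(c_ngrams.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)  # smoothed to avoid log(0)
    # Brevity penalty discourages overly short candidates.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / 4)

print(bleu4("a cat sits on the mat", "a cat sits on the mat"))  # 1.0 for an exact match
```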
Established: June 1, 2014
This project aims to enable people to converse with their devices. We are trying to teach devices to engage with humans using human language in ways that appear seamless and natural to humans. Our research focuses on statistical methods by which devices can learn from human-human conversational interactions and can situate responses in the verbal context and in physical or virtual environments. Natural and Engaging Agents that process human language will play a growing role…
Established: April 4, 2012
The Statistical Parsing and Linguistic Analysis Toolkit provides easy access to the linguistic analysis tools produced by the Natural Language Processing group at Microsoft Research. The tools include both traditional linguistic analysis tools, such as part-of-speech taggers and parsers, and more recent developments, such as sentiment analysis (identifying whether a particular piece of text has positive or negative sentiment towards its focus). Demo URL: You can find…
Established: May 9, 2008
The Microsoft Research ESL Assistant is a web service that provides correction suggestions for typical ESL (English as a Second Language) errors. Such errors include, for example, the choice of determiners (the/a) and the choice of prepositions. The web service also provides word choice suggestions from a thesaurus. In order to help the user make decisions on whether to accept a suggestion, the service displays "before and after" web search…
Deep Sentence Embedding Using Long Short-Term Memory Networks: Analysis and Application to Information Retrieval. Hamid Palangi, Li Deng, Yelong Shen, Jianfeng Gao, Xiaodong He, Jianshu Chen, Xinying Song, Rabab Ward. IEEE/ACM Transactions on Audio, Speech, and Language Processing, January 21, 2016.
Deep Reinforcement Learning with a Combinatorial Action Space for Predicting and Tracking Popular Discussion Threads. Ji He, Mari Ostendorf, Xiaodong He, Jianshu Chen, Jianfeng Gao, Lihong Li, Li Deng. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP), July 31, 2016.
Representation Learning Using Multi-Task Deep Neural Networks for Semantic Classification and Information Retrieval. Xiaodong Liu, Jianfeng Gao, Xiaodong He, Li Deng, Kevin Duh, Ye-Yi Wang. In NAACL, May 1, 2015.
A Voted Regularized Dual Averaging Method for Large-Scale Discriminative Training in Natural Language Processing. Jianfeng Gao, Tianbing Xu, Lin Xiao, Xiaodong He. September 1, 2013.
A Comparative Study of Bing Web N-gram Language Models for Web Search and Natural Language Processing. Jianfeng Gao, Patrick Nguyen, Xiaolong (Shiao-Long) Li, Chris Thrasher, Mu Li, Kuansan Wang. In Proceedings of the 33rd Annual ACM SIGIR Conference, Association for Computing Machinery, Inc., July 19, 2010.
A Comparative Study of Discriminative Methods for Reranking LVCSR N-Best Hypotheses in Domain Adaptation and Generalization. Zhengyu Zhou, Jianfeng Gao, Frank Soong, Helen Meng. ACL/SIGPARSE, April 1, 2006.
Resolving Query Translation Ambiguity Using a Decaying Co-occurrence Model and Syntactic Dependency Relations. Jianfeng Gao, Jian-Yun Nie, Hongzhao He, Weijun Chen, Ming Zhou. In SIGIR, May 3, 2002.
October 31, 2016
August 4, 2014
Li Deng, Eric Xing, Xiaodong He, Jianfeng Gao, Christopher Manning, Paul Smolensky, and Jeff A Bilmes
Microsoft Research Redmond, Carnegie Mellon University, Stanford University, Johns Hopkins University, University of Washington
June 6, 2008
Danyel Fisher, Douglas Downey, Chris Quirk, Scott Drellishak, Kelly O'Hara, Emily M. Bender, Sumit Basu, Matthew Hurst, Arnd Christian König, Michael Gamon, Chris Brockett, Dmitriy Belenko, Bill Dolan, Jianfeng Gao, and Lucy Vanderwende
Scalable Language-Model-Building Tool
This scalable language-model tool is used to build language models from large amounts of data. It supports modified absolute discounting and Kneser-Ney smoothing. The tool has been used successfully to build a seven-gram language model on 40 billion words within eight hours.
Size: 11 MB
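To illustrate the kind of smoothing the tool supports, here is a minimal sketch of absolute discounting for a bigram model on a toy corpus. The corpus and function are illustrative, not the tool's implementation; the tool's modified discounting and Kneser-Ney variants use count-dependent discounts and continuation counts for the backoff distribution rather than the raw unigram backoff used here.

```python
from collections import Counter

# Toy corpus; the real tool scales to tens of billions of words.
tokens = "the cat sat on the mat the cat ate".split()
bigrams = Counter(zip(tokens, tokens[1:]))
unigrams = Counter(tokens)

D = 0.75  # fixed discount (modified variants vary D with the count)

def p_absolute_discount(w_prev, w):
    """P(w | w_prev) with absolute discounting, backing off to unigrams."""
    c_uni = unigrams[w_prev]
    # Number of distinct words ever seen after w_prev.
    n_types = sum(1 for (a, _b) in bigrams if a == w_prev)
    # Probability mass freed by discounting, redistributed via backoff.
    lam = D * n_types / c_uni
    p_backoff = unigrams[w] / len(tokens)
    return max(bigrams[(w_prev, w)] - D, 0) / c_uni + lam * p_backoff

print(p_absolute_discount("the", "cat"))
```

A quick sanity check: summing `p_absolute_discount("the", w)` over the vocabulary yields 1, confirming the discounted mass is exactly redistributed by the backoff term.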
Bayesian Estimators for Unsupervised HMM Part-of-Speech Tagger
NLP Data Sets for Comparative Study of Parameter-Estimation Methods