Sequence-to-sequence deep learning has recently emerged as a new paradigm in supervised learning for spoken language understanding. However, most previous studies explored this framework to build single-domain models for each task, such as slot filling or domain…
Partner Research Manager in the Deep Learning Technology Center (DLTC) at Microsoft Research, Redmond. I work on deep learning for text and image processing (see our IJCAI-2016 tutorial or the MS internal site) and lead the development of AI systems for dialogue, machine reading comprehension (MRC), question answering (QA), and enterprise applications.
We are hiring Researchers with strengths in ML and NLP, and Software Engineers with rich product experience.
DSSM: We have developed a series of deep semantic similarity models (DSSM, a.k.a. Sent2Vec), which have been used for many text and image processing tasks, including web search [Huang et al. 2013, Shen et al. 2014], recommendation [Gao et al. 2014a], machine translation [Gao et al. 2014b], and QA [Yih et al. 2015].
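At its core, a DSSM maps two pieces of text into a shared low-dimensional space and scores their relevance by cosine similarity, with letter-trigram "word hashing" at the input. The sketch below illustrates only that scoring path, under stated assumptions: random untrained weights, a single tanh layer standing in for the deep feedforward stack, and function names of our own invention rather than the published implementation.

```python
import numpy as np

def build_vocab(texts):
    """Collect the letter trigrams seen in a list of strings."""
    vocab = {}
    for t in texts:
        padded = "#" + t.lower() + "#"  # word-boundary markers, as in word hashing
        for i in range(len(padded) - 2):
            vocab.setdefault(padded[i:i + 3], len(vocab))
    return vocab

def letter_trigrams(text, vocab):
    """Hash a string into a bag-of-letter-trigrams count vector."""
    padded = "#" + text.lower() + "#"
    v = np.zeros(len(vocab))
    for i in range(len(padded) - 2):
        tri = padded[i:i + 3]
        if tri in vocab:
            v[vocab[tri]] += 1.0
    return v

def embed(x, W):
    # One tanh layer as a stand-in for DSSM's multi-layer projection.
    return np.tanh(W @ x)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

rng = np.random.default_rng(0)
docs = ["deep learning for web search", "restaurant reservation dialogue"]
query = "neural networks for search"
vocab = build_vocab(docs + [query])
W = rng.standard_normal((32, len(vocab))) * 0.1  # untrained weights, illustration only

q = embed(letter_trigrams(query, vocab), W)
scores = [cosine(q, embed(letter_trigrams(d, vocab), W)) for d in docs]
```

In the trained models, W is a stack of layers optimized so that, for example, clicked query-document pairs score higher than randomly sampled ones.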
Dialogue: We have developed neural network models for social bots trained on Twitter data [project site] and task-completion bots [Lipton et al. 2016; Bhuwan et al. 2016] trained via reinforcement learning using a user simulator.
From 2006 to 2014, I was a Principal Researcher in the Natural Language Processing Group at Microsoft Research, Redmond, where I worked on web search, query understanding and reformulation, ads prediction, and statistical machine translation.
From 2005 to 2006, I was a research lead in the Natural Interactive Services Division at Microsoft, where I worked on Project X, an effort to develop a natural user interface for Windows.
From 1999 to 2005, I was a Research Lead in the Natural Language Computing Group at Microsoft Research Asia. Together with my colleagues, I developed the first Chinese speech recognition system released with Microsoft Office, the Chinese/Japanese Input Method Editors (IMEs), which were leading products in their market, and the natural language platform for Windows Vista.
Currently, I live with my family in Woodinville, WA.
Established: June 29, 2016
MSR Image Recognition Challenge (IRC) @ACM Multimedia 2016
Latest updates:
Participant information disclosed in the "Team Information" section below.
6/21/2016: Evaluation results announced in the "Evaluation Result" section below.
6/17/2016: Evaluation finished. 14 teams completed the grand challenge!
6/13/2016: Evaluation started.
6/13/2016: Dry…
Established: April 9, 2015
We introduce a novel approach for automatically generating image descriptions. Visual detectors, language models, and deep multimodal similarity models are learned directly from a dataset of image captions. Our system is state-of-the-art on the official Microsoft COCO benchmark, producing a…
Established: June 1, 2014
This project aims to enable people to converse with their devices. We are trying to teach devices to engage with people using human language in ways that feel seamless and natural. Our research focuses on statistical methods by…
Established: April 4, 2012
The Statistical Parsing and Linguistic Analysis Toolkit provides easy access to the linguistic analysis tools produced by the Natural Language Processing group at Microsoft Research. The tools include both traditional linguistic…
Established: May 9, 2008
The Microsoft Research ESL Assistant is a web service that provides correction suggestions for typical ESL (English as a Second Language) errors. Such errors include, for example, the choice of determiners (the/a) and the choice…
Deep Sentence Embedding Using Long Short-Term Memory Networks: Analysis and Application to Information Retrieval. Hamid Palangi, Li Deng, Yelong Shen, Jianfeng Gao, Xiaodong He, Jianshu Chen, Xinying Song, Rabab Ward. March 1, 2016.
Deep Reinforcement Learning with a Combinatorial Action Space for Predicting and Tracking Popular Discussion Threads. Ji He, Mari Ostendorf, Xiaodong He, Jianshu Chen, Jianfeng Gao, Lihong Li, Li Deng. July 31, 2016.
Representation Learning Using Multi-Task Deep Neural Networks for Semantic Classification and Information Retrieval. Xiaodong Liu, Jianfeng Gao, Xiaodong He, Li Deng, Kevin Duh, Ye-Yi Wang. In NAACL, May 1, 2015.
A Voted Regularized Dual Averaging Method for Large-Scale Discriminative Training in Natural Language Processing. Jianfeng Gao, Tianbing Xu, Lin Xiao, Xiaodong He. September 1, 2013.
A Comparative Study of Bing Web N-gram Language Models for Web Search and Natural Language Processing. Jianfeng Gao, Patrick Nguyen, Xiaolong (Shiao-Long) Li, Chris Thrasher, Mu Li, Kuansan Wang. In Proceedings of the 33rd Annual ACM SIGIR Conference, Association for Computing Machinery, July 19, 2010.
A Comparative Study of Discriminative Methods for Reranking LVCSR N-Best Hypotheses in Domain Adaptation and Generalization. Zhengyu Zhou, Jianfeng Gao, Frank Soong, Helen Meng. ACL/SIGPARSE, April 1, 2006.
Resolving query translation ambiguity using a decaying co-occurrence model and syntactic dependency relations. Jianfeng Gao, Jian-Yun Nie, Hongzhao He, Weijun Chen, Ming Zhou. In SIGIR, May 3, 2002.
October 31, 2016
August 4, 2014
Li Deng, Eric Xing, Xiaodong He, Jianfeng Gao, Christopher Manning, Paul Smolensky, and Jeff A Bilmes
Microsoft Research Redmond, Carnegie Mellon University, Stanford, Johns Hopkins University, University of Washington
June 6, 2008
Danyel Fisher, Douglas Downey, Chris Quirk, Scott Drellishak, Kelly O'Hara, Emily M. Bender, Sumit Basu, Matthew Hurst, Arnd Christian König, Michael Gamon, Chris Brockett, Dmitriy Belenko, Bill Dolan, Jianfeng Gao, and Lucy Vanderwende
Scalable Language-Model-Building Tool
This scalable language-model tool is used to build language models from large amounts of data. It supports modified absolute discounting and Kneser-Ney smoothing. The tool has been used successfully to build a seven-gram language model on 40 billion words within eight hours.
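The smoothing methods the tool supports redistribute probability mass from observed n-grams to unseen ones. The idea behind interpolated Kneser-Ney can be sketched for the bigram case as below; this is a toy illustration with a single fixed discount D and names of our own choosing, not the tool's implementation (which uses modified, count-dependent discounts and scales to billions of words).

```python
from collections import Counter

def kneser_ney_bigram(tokens, D=0.75):
    """Build an interpolated Kneser-Ney bigram model from a token list.

    Returns a function p(w1, w2) estimating P(w2 | w1).
    """
    bigrams = list(zip(tokens, tokens[1:]))
    bi = Counter(bigrams)                 # bigram counts c(w1, w2)
    hist = Counter(tokens[:-1])           # history counts c(w1)
    # Continuation counts: in how many distinct left contexts does w2 appear?
    cont = Counter(w2 for (_, w2) in bi.keys())
    num_bigram_types = len(bi)

    def p(w1, w2):
        p_cont = cont[w2] / num_bigram_types      # lower-order continuation prob
        if hist[w1] == 0:
            return p_cont                         # unseen history: back off fully
        discounted = max(bi[(w1, w2)] - D, 0.0) / hist[w1]
        # Mass freed by discounting, redistributed via the continuation prob.
        lam = D * sum(1 for b in bi if b[0] == w1) / hist[w1]
        return discounted + lam * p_cont

    return p

tokens = "the cat sat on the mat the cat ran".split()
p = kneser_ney_bigram(tokens)
```

The discounted term and the backoff weight are constructed so that, for a seen history, the probabilities over the vocabulary still sum to one.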
Size: 11 MB
Bayesian Estimators for Unsupervised HMM Part-of-Speech Tagger
NLP Data Sets for Comparative Study of Parameter-Estimation Methods