Yizhe Zhang is a researcher in the NLP group at Microsoft Research. His main research focus is natural language processing and deep generative models, with particular interests in neural conversation systems and text generation. Yizhe obtained his Ph.D. from Duke University in 2018, where he worked on various problems in Bayesian statistics, statistical machine learning, and natural language processing. He also has broad interests in many machine learning and statistical topics (including GANs, VAEs, MCMC, …) and hopes to connect them with current NLP research.
If you are a Ph.D. student working on text generation and interested in applying for an internship, please send me your CV.
My current research topics include, but are not limited to:
1). Large-scale transformer-based pretraining.
2). Non-autoregressive text generation.
3). Adversarial attacks in NLP.
4). Constrained/controllable text generation.
5). Self-play for open-domain conversational agents.
6). Language toxicity detection and prevention.
7). Other interplays between deep generative models, RL, MCMC, and NLP.
- [April 2020] Two papers accepted by ACL 2020.
- [Aug. 2019] Serving as a local web chair for ACL 2020.
- [Aug. 2019] Our SIGDIAL paper was nominated as a best paper candidate.
- [May 2019] Two papers accepted by ACL 2019.
- [May 2019] Serving as a meta-reviewer for AAAI 2019.
- [July 2018] Our recent papers “Adversarial Text Generation via Feature-Mover’s Distance” and “Generating Informative and Diverse Conversational Responses via Adversarial Information Maximization” were accepted by NIPS 2018.