Interpretability in NLP: Moving Beyond Vision

Date

October 10, 2019

Speaker

Shuoyang Ding

Affiliation

Johns Hopkins University

Overview

Deep neural network models have been extremely successful for natural language processing (NLP) applications in recent years, but a common complaint is their lack of interpretability. The field of computer vision, on the other hand, has developed its own ways of improving the interpretability of deep learning models, most notably post-hoc interpretation methods such as saliency. In this talk, we investigate the possibility of deploying these interpretation methods in natural language processing applications. Our study covers common NLP applications such as language modeling and neural machine translation, and we stress the necessity of evaluating interpretations quantitatively in addition to qualitatively. We show that this adaptation is generally feasible, while also pointing out shortcomings of current practice that may shed light on future research directions.
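For readers unfamiliar with saliency, the sketch below illustrates one common gradient-based variant: scoring each input token by the norm of the gradient of the model's predicted-token score with respect to that token's embedding. It is a minimal, hypothetical PyTorch example with an illustrative toy model; the model, sizes, and function names are assumptions for exposition and are not taken from the talk.

```python
import torch
import torch.nn as nn

# A tiny toy language model: embedding -> LSTM -> vocabulary projection.
# All names and sizes here are illustrative, not from the talk.
class ToyLM(nn.Module):
    def __init__(self, vocab_size=100, emb_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, emb):
        # Takes embeddings directly so we can ask for their gradients.
        hidden, _ = self.rnn(emb)
        return self.out(hidden[:, -1, :])  # logits for the next token

def token_saliency(model, token_ids, target_id):
    # Embed the input and track gradients with respect to the embeddings.
    emb = model.embed(token_ids).detach().requires_grad_(True)
    logits = model(emb)
    # Back-propagate the score of the target (here, predicted) token.
    logits[0, target_id].backward()
    # One common saliency score: L2 norm of the gradient per input token.
    return emb.grad.norm(dim=-1).squeeze(0)

model = ToyLM()
tokens = torch.tensor([[5, 17, 42, 8]])          # a toy 4-token input
target = model(model.embed(tokens)).argmax(-1)   # the model's own prediction
print(token_saliency(model, tokens, target.item()))
```

The printed vector assigns one score per input token; higher values are read as the token contributing more to the prediction, which is the kind of interpretation whose quantitative evaluation the talk emphasizes.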

[SLIDES]

Speakers

Shuoyang Ding

Shuoyang Ding is a PhD candidate in the Center for Language and Speech Processing (CLSP) at Johns Hopkins University, advised by Professor Philipp Koehn. His main research interest is neural machine translation, particularly improving its interpretability and robustness. He has also worked on syntactic parsing and speech recognition, the latter of which won the best student paper award at Interspeech 2018. Before joining Johns Hopkins, Shuoyang received his Bachelor's degree from Beijing University of Posts & Telecommunications (BUPT), spending his final undergraduate year as a research assistant with Weiwei Sun at Peking University, focusing on semantic parsing and Chinese word segmentation.