3rd Pacific Northwest Regional NLP Workshop: Keynote & Afternoon Talks 1


April 28, 2014


Keynote Talk: Chris Burges (MSR). Do we really need machines to comprehend? – and, two datasets for machine comprehension

The last few years have seen major advances in AI: in web search, computer vision, speech, and machine translation, all achieved without solving the problem of machine comprehension, and in fact with pretty much no reference to the term “AI”. Is it the case that larger datasets, faster computers, and cleverer algorithms will provide all we’ll ever need to solve most problems we’d like to solve, without recourse to deep semantic modeling of language? The second part of my title reveals my own stance on this, but it is always a good exercise to ask hard questions and consider the simplest possible approaches first. In the second part of the talk I will describe two datasets that we recently created to help researchers attack the problem of domain-independent machine comprehension of language.

Afternoon Talks 1:

14:30 ConVis: A Visual Text Analytic System for Exploring Asynchronous Online Conversations, Enamul Hoque and Giuseppe Carenini

14:50 Graph Propagation for Paraphrasing Out-of-Vocabulary Words in Statistical Machine Translation, Majid Razmara, Maryam Siahbani, Gholamreza Haffari and Anoop Sarkar


Chris J.C. Burges, Enamul Hoque, and Ramtin Mehdizadeh Seraj

Chris Burges received his PhD in theoretical physics, on constraints on new particle mass scales and on models of early-universe cosmology, at Brandeis University in 1984. After a two-year postdoc in the theoretical physics department at MIT, during which he worked on supersymmetry in anti-de Sitter space, cellular automata models of thermal fluids, and the gravitational Aharonov-Bohm effect, and after the arrival of his family’s first baby, he abruptly switched to a more family-friendly field and became a systems engineer for AT&T Bell Labs. There he worked on network performance and routing: AT&T still uses his algorithms to route its CCS7 signaling network and others (CCS7 is the nervous system of the long-distance network). On attending a cool demo of neural networks reading handwritten digits, he switched fields again, and began his long descent into machine learning.

He has worked on handwriting and machine-print recognition (he worked on a system now used to read millions of checks daily, and on zip code and handwritten address recognition for the USPS), support vector machines, audio fingerprinting (his work is currently used in Xbox and Windows Media Player to identify music), speaker verification, information retrieval, and semantic modeling. His ranking algorithm is currently used in Bing to rank web search results. Chris was program co-chair of Neural Information Processing Systems (NIPS) 2012 and general co-chair of NIPS 2013. His main current research interest is modeling meaning in language.