Keynote Talk: Chris Burges (MSR). Do we really need machines to comprehend? – and, two datasets for machine comprehension
The last few years have seen major advances in AI: in web search, computer vision, speech, and machine translation, all achieved without solving the problem of machine comprehension, and in fact with pretty much no reference to the term “AI”. Is it the case that larger datasets, faster computers, and cleverer algorithms will provide all we’ll ever need to solve most problems we’d like to solve, without recourse to deep semantic modeling of language? The second part of my title reveals my own stance on this, but it is always a good exercise to ask hard questions and consider the simplest possible approaches first. In the second part of the talk I will describe two datasets that we recently created to help researchers attack the problem of domain-independent machine comprehension of language.
Afternoon Talks 1:
14:30 ConVis: A Visual Text Analytic System for Exploring Asynchronous Online Conversations, Enamul Hoque and Giuseppe Carenini
14:50 Graph Propagation for Paraphrasing Out-of-Vocabulary Words in Statistical Machine Translation, Majid Razmara, Maryam Siahbani, Gholamreza Haffari and Anoop Sarkar