Natural Logic and Alignment in Natural Language Inference


October 8, 2008


Bill MacCartney


Stanford University


This is a two-part talk. In the first part, we propose an approach to natural language inference (NLI) based on a model of natural logic, which identifies valid inferences by their lexical and syntactic features, without full semantic interpretation. We greatly extend past work in natural logic, which has focused solely on semantic containment and monotonicity, to incorporate both semantic exclusion and implicativity. Our system decomposes an inference problem into a sequence of atomic edits linking premise to hypothesis; predicts a lexical entailment relation for each edit using a statistical classifier; propagates these relations upward through a syntax tree according to semantic properties of intermediate nodes; and composes the resulting entailment relations across the edit sequence. We evaluate our system on the FraCaS test suite, and achieve a 27% reduction in error from previous work. We also show that hybridizing an existing RTE system with our natural logic system yields significant gains on the RTE3 test suite.
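To make the composition step concrete, here is a toy sketch of how per-edit entailment relations might be folded together. The seven relation names and the few join-table entries follow MacCartney and Manning's published formulation; the function names and the default-to-independence fallback are illustrative simplifications, not the actual system.

```python
# The seven basic entailment relations of the natural logic model:
# equivalence, forward entailment, reverse entailment, negation,
# alternation, cover, and independence.
EQ, FWD, REV, NEG, ALT, COV, IND = "=", "<", ">", "^", "|", "u", "#"

# A few entries of the join (composition) table. Compositions not listed
# here are approximated by independence (#), which discards information
# but never licenses an unsound inference.
JOIN = {
    (FWD, FWD): FWD,   # e.g. crow < bird and bird < animal, so crow < animal
    (REV, REV): REV,
    (NEG, NEG): EQ,    # double negation
}

def join(r1, r2):
    """Compose two entailment relations; EQ acts as the identity."""
    if r1 == EQ:
        return r2
    if r2 == EQ:
        return r1
    return JOIN.get((r1, r2), IND)

def compose(relations):
    """Fold the per-edit relations into one overall relation."""
    result = EQ
    for r in relations:
        result = join(result, r)
    return result

def answer(relation):
    """Map the final relation onto a three-way NLI decision."""
    if relation in (EQ, FWD):
        return "yes"        # entailment
    if relation in (NEG, ALT):
        return "no"         # contradiction
    return "unknown"

# Two forward-entailing edits still entail; an unmodeled composition
# falls back to "unknown".
print(answer(compose([FWD, FWD])))   # -> yes
print(answer(compose([FWD, REV])))   # -> unknown
```

The point of the sketch is the shape of the computation: each atomic edit contributes one relation, and a single fold over the edit sequence yields the answer.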

In the second part, we address the problem of alignment—that is, establishing links between corresponding phrases in two related sentences—which is as important in NLI as it is in machine translation (MT). But the tools and techniques of MT alignment do not readily transfer to NLI, where one cannot assume semantic equivalence, and for which large volumes of bitext are lacking. We present a new NLI aligner, the MANLI system, designed to address these challenges. It uses a phrase-based alignment representation, exploits external lexical resources, and capitalizes on a new set of supervised training data. We compare the performance of MANLI to existing NLI and MT aligners on an NLI alignment task over the well-known Recognizing Textual Entailment data. We show that MANLI significantly outperforms existing aligners, achieving gains of 6.2% in F1 over a representative NLI aligner and 10.5% over GIZA++.
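A phrase-based alignment of this kind can be pictured as a set of phrase edits covering the two sentences. The edit types (EQ, SUB, DEL, INS) follow the MANLI paper's representation; the dataclass, scoring weights, and similarity stand-in below are hypothetical illustrations, not MANLI's actual features or model.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PhraseEdit:
    kind: str              # "EQ", "SUB", "DEL", or "INS"
    premise: tuple         # premise tokens covered (empty for INS)
    hypothesis: tuple      # hypothesis tokens covered (empty for DEL)

# Hypothetical per-edit scores: exact matches are rewarded, substitutions
# are scored by a (stand-in) lexical similarity, and unmatched material
# is penalized.
def score_edit(edit, lexical_sim):
    if edit.kind == "EQ":
        return 1.0
    if edit.kind == "SUB":
        return lexical_sim(edit.premise, edit.hypothesis)
    return -0.5            # DEL or INS

def score_alignment(edits, lexical_sim):
    """Score of an alignment = sum of its phrase-edit scores."""
    return sum(score_edit(e, lexical_sim) for e in edits)

# Toy example: aligning "a man is sleeping ." with "a guy sleeps"
sim = lambda p, h: 0.8     # stand-in for a resource-backed similarity
edits = [
    PhraseEdit("EQ",  ("a",),             ("a",)),
    PhraseEdit("SUB", ("man",),           ("guy",)),
    PhraseEdit("SUB", ("is", "sleeping"), ("sleeps",)),
    PhraseEdit("DEL", (".",),             ()),
]
print(score_alignment(edits, sim))   # -> 2.1
```

Allowing a single SUB edit to cover a multi-word phrase ("is sleeping" / "sleeps") is exactly what token-level, one-to-one alignment representations cannot express cleanly.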


Bill MacCartney

Bill MacCartney is a doctoral candidate in Computer Science at Stanford University. He is advised by Prof. Chris Manning, and is affiliated with the Stanford NLP Group and the Stanford AI Lab. His research concerns probabilistic approaches to natural language inference (NLI), and he has played a major role in the development of Stanford’s Recognizing Textual Entailment (RTE) system. His recent work has focused on (1) the development of a computational model of natural logic, and (2) the problem of alignment for NLI. His paper on the former topic received the Springer Best Paper Award at the 22nd International Conference on Computational Linguistics (Coling-08). MacCartney graduated summa cum laude with a B.A. in Philosophy from Princeton University. He later became a Fulbright Scholar, a bond and derivatives trader at D. E. Shaw & Co., and co-founder of a Softbank-funded internet startup.