Translingual Document Representations from Discriminative Projections

  • John Platt,
  • Kristina Toutanova,
  • Scott Wen-tau Yih

Empirical Methods in Natural Language Processing (EMNLP-2010)

Published by Association for Computational Linguistics

Representing documents by vectors that are independent of language enhances machine translation and multilingual text categorization. We use discriminative training to create a projection of documents from multiple languages into a single translingual vector space. We explore two methods for creating these projections: Oriented Principal Component Analysis (OPCA) and Coupled Probabilistic Latent Semantic Analysis (CPLSA). Each starts from a basic document model (PCA and PLSA, respectively), which is then made discriminative by encouraging comparable document pairs to have similar vector representations. We evaluate these algorithms on two tasks: parallel document retrieval for Wikipedia and Europarl documents, and cross-lingual text classification on Reuters. The two discriminative methods, OPCA and CPLSA, significantly outperform their corresponding baselines. The largest performance differences are observed on retrieval when the documents are only comparable, not parallel. OPCA performs best overall.
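The core idea behind the OPCA variant can be viewed as a generalized eigenproblem: find projection directions that preserve the overall covariance of the document collection (signal) while suppressing the covariance of differences between comparable document pairs (noise). The following NumPy sketch illustrates that idea under stated assumptions; the function name, the ridge-style regularization, and the synthetic setup are illustrative, not the paper's exact formulation.

```python
import numpy as np

def opca_projection(X_src, X_tgt, k, gamma=0.1):
    """Illustrative Oriented PCA for translingual projections.

    X_src, X_tgt: (n, d) term-vector matrices whose rows are comparable
    document pairs in a shared bilingual vocabulary space (an assumption
    of this sketch). Returns a (d, k) matrix of generalized eigenvectors:
    projecting with it keeps directions of high overall document variance
    while suppressing directions where comparable pairs disagree.
    """
    X = np.vstack([X_src, X_tgt])
    X = X - X.mean(axis=0)
    C = X.T @ X / X.shape[0]          # signal: covariance of all documents
    D = X_src - X_tgt
    N = D.T @ D / D.shape[0]          # noise: covariance of pair differences
    # Ridge regularization (an assumed choice) keeps N positive definite.
    N_reg = N + gamma * (np.trace(N) / N.shape[0]) * np.eye(N.shape[0])
    # Solve the generalized eigenproblem C v = lambda * N_reg v via Cholesky:
    # with N_reg = L L^T and u = L^T v, it reduces to a standard symmetric
    # eigenproblem for L^{-1} C L^{-T}.
    L = np.linalg.cholesky(N_reg)
    L_inv = np.linalg.inv(L)
    w, U = np.linalg.eigh(L_inv @ C @ L_inv.T)   # eigenvalues ascending
    V = L_inv.T @ U                   # map back to generalized eigenvectors
    return V[:, ::-1][:, :k]          # top-k directions
```

On synthetic paired data with a shared latent structure, projecting both sides with the returned matrix shrinks the relative variance of pair differences, which is the property the discriminative training is meant to encourage.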

Publication Downloads

Data Set of English-Spanish Term Vectors from Wikipedia

August 8, 2011

This data set consists of the term vectors extracted from 60,730 Wikipedia English articles and their comparable Spanish articles, sampled in 2009. We used this data set to test various models for creating translingual document representations, work published in [Platt et al. EMNLP-2010] and [Yih et al. CoNLL-2011]. More details about this data set can be found in the ReadMe file.