Large Margin Generative Models

Date

May 10, 2004

Speaker

Tony Jebara

Affiliation

Columbia University

Overview

Generative models such as Bayesian networks, exponential family distributions and mixtures are elegant formalisms for setting up and specifying prior knowledge about a learning problem. However, the standard estimation methods they rely on, including maximum likelihood and Bayesian integration, do not focus the modeling resources on a particular input-output task. In applied settings, where models are imperfectly matched to the real data, discriminative learning is crucial for improving performance with such models. We consider classifiers built from the log-likelihood ratio of generative models and find parameters for these models such that the resulting discrimination boundary has a large margin. Through maximum entropy discrimination, we show how all exponential family models can be estimated with large margin using convex programming. Furthermore, we consider interesting latent models such as mixture models and hidden Markov models, where the additional presence of latent variables makes large margin estimation difficult. We propose a variant of the maximum entropy discrimination method that uses variational bounding on classification constraints to make computations tractable in the latent case. The method finds large margin settings reliably by iteratively interleaving standard expectation steps with large margin maximization information projection steps. Interestingly, the method gives rise to Lagrange multipliers that behave like posteriors over hidden variables. Preliminary experiments are shown.
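The formulation below is a minimal LaTeX sketch of the setup described in the abstract, written in assumed notation (the discriminant $\mathcal{L}$, bias $b$, prior $P_0$, and margin $\gamma$ are illustrative symbols, not taken from the talk): the classifier is the log-likelihood ratio of two generative models, and maximum entropy discrimination seeks a distribution over parameters close to a prior, subject to expected margin constraints on the labeled training data.

\[
\mathcal{L}(x;\Theta) \;=\; \log \frac{p(x \mid \theta_{+})}{p(x \mid \theta_{-})} + b,
\qquad
\hat{y}(x) \;=\; \operatorname{sign}\,\mathcal{L}(x;\Theta)
\]
\[
\min_{P(\Theta)} \; \mathrm{KL}\!\left(P(\Theta)\,\|\,P_{0}(\Theta)\right)
\quad \text{subject to} \quad
\int P(\Theta)\, y_{t}\,\mathcal{L}(x_{t};\Theta)\, d\Theta \;\ge\; \gamma
\quad \text{for all training pairs } (x_{t}, y_{t}).
\]

For exponential family models this program can be solved by convex programming, as stated in the abstract; with latent variables the expected discriminant is no longer tractable, which is where the proposed variational bounding and iterated information projection steps enter.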

Speakers

Tony Jebara

Tony Jebara is an Assistant Professor of Computer Science at Columbia University. He is Director of the Columbia Machine Learning Laboratory, whose research focuses on machine learning, computer vision and related application areas such as human-computer interaction. Jebara is also a Principal Investigator at Columbia’s Vision and Graphics Center. He has published over 30 papers in the above areas, including the book Machine Learning: Discriminative and Generative (Kluwer). Jebara is the recipient of the National Science Foundation CAREER Award and has also received honors for his papers from the International Conference on Machine Learning and from the Pattern Recognition Society. He has served as co-chair and program committee member for various conferences and workshops. Jebara’s research has been featured on television (ABC, BBC, New York One, TechTV, etc.) as well as in the popular press (Wired Online, Scientific American, Newsweek, Science Photo Library, etc.). Jebara obtained his Bachelor’s from McGill University (at the McGill Center for Intelligent Machines) in 1996. He obtained his Master’s in 1998 and his PhD in 2002, both from the Massachusetts Institute of Technology (at the MIT Media Laboratory). He is currently a member of the IEEE, ACM and AAAI. Professor Jebara’s research and laboratory are supported in part by Alpha Star Corporation and the National Science Foundation.