Submodular Optimization and Machine Learning – Part 2


July 29, 2015


Stefanie Jegelka




Many problems in machine learning that involve discrete structures or subset selection can be phrased in the language of submodular set functions. The property of submodularity, also referred to as a 'discrete analog of convexity', expresses the notion of diminishing marginal returns, and captures combinatorial versions of rank and dependence. Submodular functions occur in a variety of areas, including graph theory, information theory, combinatorial optimization, stochastic processes, and game theory. In machine learning, they emerge in different forms: as the potential functions of graphical models, as the utility functions in active learning and sensing, in models of diversity, and in structured sparse estimation and network inference. The lectures will give an introduction to the theory of submodular functions, some applications in machine learning, and algorithms for minimizing and maximizing submodular functions that exploit ties to both convexity and concavity.
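As a small illustration of the diminishing-returns property mentioned above (a sketch added here for concreteness, not taken from the lectures), consider the classic coverage function f(S) = |union of the subsets indexed by S|, which is submodular: the marginal gain of adding a new element to a smaller selection is at least as large as its gain for any larger selection containing it.

```python
def coverage(subsets, S):
    """f(S): number of distinct items covered by the subsets indexed by S."""
    covered = set()
    for i in S:
        covered |= subsets[i]
    return len(covered)

# Hypothetical ground set of subsets over the items {1, ..., 5}.
subsets = [{1, 2}, {2, 3}, {4}, {1, 4, 5}]

A = {0}          # smaller selection
B = {0, 1, 2}    # larger selection, with A a subset of B
e = 3            # candidate element to add to either selection

gain_A = coverage(subsets, A | {e}) - coverage(subsets, A)  # 4 - 2 = 2
gain_B = coverage(subsets, B | {e}) - coverage(subsets, B)  # 5 - 4 = 1

# Diminishing marginal returns: the gain on the smaller set is at least
# as large as the gain on the larger set.
assert gain_A >= gain_B
```

The same inequality, f(A ∪ {e}) − f(A) ≥ f(B ∪ {e}) − f(B) for all A ⊆ B and e ∉ B, is one standard definition of submodularity, and it underlies the greedy guarantees for submodular maximization discussed in the lectures.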


Stefanie Jegelka

Stefanie Jegelka is a postdoctoral researcher at UC Berkeley, supervised by Michael I. Jordan and Trevor Darrell. She received a Ph.D. in Computer Science from ETH Zurich in 2012, in collaboration with the Max Planck Institute for Intelligent Systems. She completed her studies for a Diploma in Bioinformatics with distinction at the University of Tuebingen (Germany) and the University of Texas at Austin. She was a fellow of the German National Academic Foundation (Studienstiftung) and its scientific college for life sciences, and has received a Google Anita Borg Europe Fellowship and an ICML Best Paper Award. She has also been a research visitor at Georgetown University Medical Center and Microsoft Research, and has given tutorials and organized workshops on submodularity in machine learning.