Efficient Minimization of Risk Measures via Smoothing: Theory and Applications

  • Ankan Saha | University of Chicago

At the heart of most machine learning problems lies a regularized risk minimization problem. With the explosion of machine learning techniques and applications, it becomes imperative to develop faster algorithms for optimizing these objectives. We develop a novel smoothing strategy, motivated by Nesterov’s accelerated gradient descent methods, which improves upon previous first-order algorithms and attains faster optimal rates of convergence of O(1/√ε) for various nonsmooth machine learning objectives, including binary linear SVMs and structured output prediction. Additionally, we show that such smoothing lets us attack multivariate measures such as the ROC score and the precision-recall break-even point, which are not additive over the individual data points. Our experiments show orders-of-magnitude improvements. We will also show how these schemes can be used to obtain faster algorithms for certain computational geometry problems, such as finding minimum enclosing convex shapes.
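
To make the recipe concrete, here is a minimal sketch of the idea for the binary linear SVM: the nonsmooth hinge loss is replaced by its Nesterov smoothing with parameter mu, and the resulting smooth objective is minimized with an accelerated gradient method. The function names, the FISTA-style momentum schedule, and the Lipschitz-constant estimate below are illustrative assumptions, not the talk's actual implementation.

```python
import numpy as np

def smoothed_hinge(margins, mu):
    """Nesterov-smoothed hinge loss, elementwise.

    max(0, 1 - m) is replaced by a smooth surrogate with parameter mu;
    the approximation error is at most mu/2 per data point.
    """
    z = 1.0 - margins
    loss = np.where(z <= 0, 0.0,
                    np.where(z >= mu, z - mu / 2.0, z**2 / (2.0 * mu)))
    # Derivative of the smoothed loss with respect to the margin m.
    grad = np.where(z <= 0, 0.0, np.where(z >= mu, -1.0, -z / mu))
    return loss, grad

def accelerated_svm(X, y, lam=1e-2, mu=1e-3, iters=500):
    """Accelerated gradient descent on the smoothed SVM objective
    f(w) = (1/n) * sum_i smoothed_hinge(y_i <w, x_i>) + (lam/2) ||w||^2."""
    n, d = X.shape
    # Gradient Lipschitz constant: ||X||_2^2 / (n * mu) + lam
    # (spectral-norm bound; any valid upper bound works here).
    L = np.linalg.norm(X, 2) ** 2 / (n * mu) + lam
    w = np.zeros(d); v = w.copy(); t = 1.0
    for _ in range(iters):
        margins = y * (X @ v)
        _, g_margin = smoothed_hinge(margins, mu)
        grad = X.T @ (g_margin * y) / n + lam * v
        w_next = v - grad / L  # gradient step at the lookahead point
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        v = w_next + ((t - 1.0) / t_next) * (w_next - w)  # Nesterov momentum
        w, t = w_next, t_next
    return w
```

Choosing mu proportional to ε makes the smoothed objective an ε-accurate surrogate of the true hinge loss; exploiting the strong convexity contributed by the regularizer in the accelerated method is what yields the O(1/√ε) rate quoted above, whereas the plain momentum schedule is shown here for simplicity.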

Speaker Details

I am a fifth-year graduate student at the University of Chicago. My interests lie mostly at the intersection of convex optimization (smooth and nonsmooth), online learning, and temporal information retrieval. I am interested in developing faster optimization algorithms and have also analyzed convergence guarantees for existing algorithms in online learning, matrix factorization, compressed sensing, and other areas of machine learning. I frequently collaborate with Prof. S. V. N. Vishwanathan at Purdue University and Prof. Ambuj Tewari at the University of Michigan. I am interning with the machine learning group this summer, working with Asela Gunawardana on temporal instant-answer generation from query logs.

    • Jeff Running

Series: Microsoft Research Talks