Introduction to large-scale optimization – Part 2


July 23, 2015


Suvrit Sra


Max Planck Institute for Intelligent Systems


These lectures will cover both the basics and cutting-edge topics in large-scale convex and nonconvex optimization (continuous case only). Examples include stochastic convex optimization, variance-reduced stochastic gradient methods, coordinate descent methods, proximal methods, operator splitting techniques, and more. The lectures will also cover the relevant mathematical background, as well as some pointers to interesting directions of future research.
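To give a flavor of the stochastic gradient methods the lectures discuss, here is a minimal sketch of stochastic gradient descent (SGD) on a least-squares problem. All names, data, and the constant step size are illustrative choices (the step size works here because the synthetic system is noiseless), not part of the lecture material:

```python
import numpy as np

# Minimal SGD sketch: minimize f(x) = (1/2n) * ||A x - b||^2
# using one randomly sampled data point per iteration.
rng = np.random.default_rng(0)
n, d = 200, 5
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true                  # noiseless targets, so x_true is optimal

x = np.zeros(d)
step = 0.05                     # small constant step size (illustrative)
for _ in range(2000):
    i = rng.integers(n)                  # sample one row uniformly at random
    grad_i = (A[i] @ x - b[i]) * A[i]    # stochastic gradient estimate
    x -= step * grad_i

# x moves toward x_true; each cheap step uses a single data point
```

Each iteration costs O(d) instead of the O(nd) of a full gradient step, which is the basic appeal of stochastic methods at large scale; variance-reduced variants (also covered in the lectures) improve on the convergence rate of plain SGD.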


Suvrit Sra

Suvrit Sra is a Research Scientist at the Max Planck Institute for Intelligent Systems (formerly Biological Cybernetics) in Tübingen, Germany. He obtained his M.S. and Ph.D. in Computer Science from the University of Texas at Austin in 2007, and a B.E. (Hons.) in Computer Science from BITS, Pilani (India) in 1999. His main research focus is on large-scale optimization (convex, nonconvex, deterministic, stochastic, etc.), most notably for applications in machine learning, scientific computing, and computational statistics. He takes an avid interest in various flavors of analysis, especially convex, harmonic, and matrix analysis.

His research has won awards at several international venues; the most recent being the “SIAM Outstanding Paper Prize (2011)” for his work on metric nearness. He regularly organizes the Neural Information Processing Systems (NIPS) workshops on “Optimization for Machine Learning”.