On Gradient-Based Optimization: Accelerated, Stochastic and Nonconvex

  • Michael Jordan | UC Berkeley

Many new theoretical challenges have arisen in the area of gradient-based optimization for large-scale statistical data analysis, driven by the needs of applications and the opportunities provided by new hardware and software platforms. I discuss several recent, related results in this area: (1) a new framework for understanding Nesterov acceleration, obtained by taking a continuous-time, Lagrangian/Hamiltonian/symplectic perspective; (2) how to escape saddle points efficiently in nonconvex optimization; and (3) the acceleration of Langevin diffusion.
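
As an illustrative sketch only (not material from the talk), the snippet below contrasts plain gradient descent with Nesterov's accelerated gradient method on a small quadratic; the problem instance, step size, and iteration counts are assumptions chosen purely for demonstration.

    # Minimal sketch: gradient descent vs. Nesterov's accelerated gradient
    # on f(x) = 0.5 * x^T A x (an assumed, ill-conditioned example problem).
    import numpy as np

    A = np.diag([1.0, 100.0])      # assumed example: condition number 100
    L = 100.0                      # Lipschitz constant of the gradient
    x0 = np.array([1.0, 1.0])

    def grad(x):
        return A @ x               # gradient of f(x) = 0.5 * x^T A x

    def gradient_descent(x, steps=200):
        for _ in range(steps):
            x = x - (1.0 / L) * grad(x)
        return x

    def nesterov(x, steps=200):
        # Standard accelerated scheme: gradient step at a look-ahead point,
        # followed by momentum extrapolation with weight (k - 1) / (k + 2).
        y, x_prev = x.copy(), x.copy()
        for k in range(1, steps + 1):
            x_new = y - (1.0 / L) * grad(y)
            y = x_new + (k - 1) / (k + 2) * (x_new - x_prev)
            x_prev = x_new
        return x_prev

    print(gradient_descent(x0))    # converges slowly along the flat direction
    print(nesterov(x0))            # approaches the minimizer (0, 0) faster

In the continuous-time view, iterations of this kind can be analyzed as discretizations of a second-order ODE; the best-known instance is d²X/dt² + (3/t) dX/dt + ∇f(X) = 0, and the Lagrangian/Hamiltonian framework mentioned above generalizes such limits.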

Speaker Details

Michael I. Jordan is the Pehong Chen Distinguished Professor in the Department of Statistics and the Department of Electrical Engineering and Computer Science at the University of California, Berkeley. His research interests bridge the computational, statistical, cognitive and biological sciences. Prof. Jordan is a member of the National Academy of Sciences, the National Academy of Engineering and the American Academy of Arts and Sciences. He has been named a Neyman Lecturer and a Medallion Lecturer by the Institute of Mathematical Statistics. He received the IJCAI Research Excellence Award in 2016 and the David E. Rumelhart Prize in 2015.

Series: MSR AI Distinguished Lectures and Fireside Chats