Optimization methods are the engine of machine learning algorithms. Examples abound, such as training neural networks with stochastic gradient descent, segmenting images with submodular optimization, or efficiently searching a game tree with bandit algorithms. We aim to advance the mathematical foundations of both discrete and continuous optimization and to leverage these advances to develop new algorithms with a broad set of AI applications. Some of the current directions pursued by our members include convex optimization,…
I was co-general chair for COLT 2013 and COLT 2014, and I have served on the program committees of NIPS (2012, 2014, 2016), COLT (2013, 2014, 2015, 2016), ICML (2015, 2016), and ALT (2013, 2014). I am also on the steering committee for COLT.
Currently I’m especially interested in (i) the interplay between convexity and randomness in optimization, and (ii) inference problems on random graphs.
My research interests include: machine learning, convex optimization, multi-armed bandits, statistical network analysis, random graphs and random matrices, and applications of information theory to learning, optimization, and probability.
NEW: polynomial-time algorithm for bandit convex optimization
From July 2014 to July 2016, with various co-authors at MSR, we dedicated a lot of energy to bandit convex optimization. The end product is a new efficient algorithm. To learn more about it, I recommend first watching this [youtube video], then reading these three blog posts ([part 1], [part 2], [part 3]), and finally [the paper] itself.
Nearest Neighbor Clustering: A Baseline Method for Consistent Clustering with Arbitrary Objective Functions. Sébastien Bubeck, Ulrike von Luxburg. Journal of Machine Learning Research, March 2009.
July 13, 2017
James R. Lee
University of Washington
July 12, 2017
University of Washington
November 10, 2016
Machine Learning Algorithms Workshop: Logarithmic Time Online Multiclass Prediction & Log-Concave Sampling with SGD
November 19, 2015
John Langford and Sebastien Bubeck
MSR Redmond, Microsoft
March 10, 2015
Thomas Rothvoss, Sebastien Bubeck, Rishabh K Iyer, and Ran Gilad-Bachrach
University of Washington, Microsoft Research
October 22, 2014
Sebastien Bubeck and Nick Gravin
January 24, 2014
November 20, 2013
- 2014 – Present: Researcher, Microsoft Research, Theory Group (Redmond, WA, USA).
- 2011 – 2014: Assistant Professor, Princeton University, Department of Operations Research and Financial Engineering (Princeton, NJ, USA).
- Fall 2013: Visiting Scientist, Simons Institute, UC Berkeley (Berkeley, CA, USA).
- 2010 – 2011: Postdoc, Centre de Recerca Matemàtica (Barcelona, Spain).
- 2007 – 2010: Ph.D. student (specialty: Applied Mathematics), INRIA Nord Europe (Lille, France).
- 2008 – 2010: Teaching assistant at the University of Lille 1 (Lille, France).
- July-August 2006: RIPS student (Research in Industrial Projects for Students), Institute for Pure and Applied Mathematics, UCLA (Los Angeles, CA, USA).
- 2005 – 2008: Student at the Ecole Normale Supérieure de Cachan (Cachan, France).
For a more detailed curriculum vitae, see my resume.
- Best Paper Award at COLT (Conference on Learning Theory) 2016.
- 2015 Alfred P. Sloan Research Fellow in Computer Science.
- Second prize for the best French Ph.D. in Artificial Intelligence (AI prize 2011).
- Jacques Neveu prize 2010 for the best French Ph.D. in Probability/Statistics.
- Second prize for the best French Ph.D. in Computer Science (Gilles Kahn prize 2010).
- Best Student Paper Award at COLT (Conference on Learning Theory) 2009.
- I’m a bandit. Random topics on optimization, probability, and statistics.
Convex Optimization: Algorithms and Complexity
S. Bubeck
In Foundations and Trends in Machine Learning, Vol. 8, No. 3-4, pp. 231-357, 2015
Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit Problems
S. Bubeck and N. Cesa-Bianchi
In Foundations and Trends in Machine Learning, Vol. 5, No. 1, pp. 1-122, 2012
Interns in the Theory Group at Microsoft Research
- Fan Wei, graduate student at Stanford with Jacob Fox.
- Shirshendu Ganguly, graduate student at UW with Ioana Dumitriu and Christopher Hoffman.
- Ewain Gwynne, graduate student at MIT with Scott Sheffield.
- Yin-Tat Lee, graduate student at MIT with Jonathan Kelner.
- Miklos Racz, former graduate student at UC Berkeley with Elchanan Mossel, now postdoc in the Theory Group at MSR.
Undergraduate advising at Princeton University
- Horia Mania, graduate student at UC Berkeley with Ben Recht.
- Billy Fang, graduate student at UC Berkeley.
- Christian Fong, graduate student at Stanford.
- Tengyao Wang, graduate student at Cambridge with Richard Samworth.
Graduate students at Princeton University
- Che-Yu Liu, ORFE
- Kernel-based method for bandit convex optimization [youtube].
- New Results at the Crossroads of Convexity, Learning and Information Theory [ENS video].
- Revisiting Nesterov’s Acceleration [IMA video].
- Convex bandits (two lectures, one by myself and one by Ronen Eldan) [CIMI videos].
- Log-concave sampling with projected Langevin Monte Carlo [IMA video].
- Entropic barrier [IMA video] (short version: [COLT 2015 videolectures.net]).
- Influence of the seed in uniform attachment [MSR video] (preliminary results on the influence of the seed in preferential attachment [youtube]).
- Linear bandits [MSR video].
- Most correlated arms identification [COLT 14 videolectures.net].
- Multiple best arms identification and optimal discovery with probabilistic expert advice [MSR video]. Same talk a year before at ICML 2012: [techtalks video].
- 0th order stochastic Lipschitz optimization [youtube].
- Clique number in random geometric graph [youtube].
- Best of both worlds in bandits [NIPS 13 videolectures.net].
- Bounded regret in multi-armed bandits [COLT 13 videolectures.net].
- Towards minimax policies for linear bandits [COLT 12 techtalks].
- Combinatorial bandits [COLT 11 videolectures.net].
Tutorial on Bandits Games
In recent years the multi-armed bandit problem has attracted a lot of attention in the theoretical learning community. This growing interest is a consequence of the large number of problems that can be modeled as a multi-armed bandit: ad placement, website optimization, packet routing, etc. Furthermore, the bandit methodology is also used as a building block for more complicated scenarios such as reinforcement learning, model selection in statistics, or computer game-playing. While the basic stochastic multi-armed bandit can be traced back to Thompson (1933) and Robbins (1952), it is only very recently that we obtained an (almost) complete understanding of this simple model. Moreover, many extensions of the original problem have been proposed in the past fifteen years, such as bandits without a stochastic assumption (the so-called adversarial model), or bandits with a very large (but structured) set of arms.
This tutorial will cover in detail the state of the art for the basic multi-armed bandit problem (both stochastic and adversarial), and the information-theoretic analysis of Bayesian bandit problems. We will also touch upon contextual bandits, as well as the case of a very large (possibly infinite) set of arms with linear/convex/Lipschitz losses.
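As a minimal illustration of the stochastic model described above, here is a sketch of the classical UCB1 strategy (empirical mean plus an exploration bonus) on Bernoulli arms. This is a generic textbook example, not code from the tutorial; the arm means, horizon, and function names are made up for illustration.

```python
import math
import random

def ucb1(arm_means, horizon, seed=0):
    """Run UCB1 on Bernoulli arms; return total reward and per-arm pull counts."""
    rng = random.Random(seed)
    n = len(arm_means)
    counts = [0] * n     # number of pulls of each arm
    sums = [0.0] * n     # cumulative reward of each arm
    total = 0.0
    for t in range(1, horizon + 1):
        if t <= n:
            arm = t - 1  # initialization: play each arm once
        else:
            # pick the arm maximizing empirical mean + sqrt(2 log t / count)
            arm = max(range(n), key=lambda i: sums[i] / counts[i]
                      + math.sqrt(2.0 * math.log(t) / counts[i]))
        reward = 1.0 if rng.random() < arm_means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
        total += reward
    return total, counts

# Hypothetical instance: three arms, the last one clearly best
total, counts = ucb1([0.3, 0.5, 0.7], horizon=5000)
```

On such an instance, the best arm ends up being pulled far more often than the others, which is exactly the logarithmic-regret behavior analyzed in the stochastic part of the tutorial.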
Old version of the slides