Optimization methods are the engine of machine learning algorithms. Examples abound, such as training neural networks with stochastic gradient descent, segmenting images with submodular optimization, or efficiently searching a game tree with bandit algorithms. We aim to advance the mathematical foundations of both discrete and continuous optimization and to leverage these advances to develop new algorithms with a broad set of AI applications. Some of the current directions pursued by our members include convex optimization,…
I am a principal researcher at Microsoft Research, located in Redmond, WA. My current research interests include theory and algorithms for large-scale optimization, stochastic and online algorithms for machine learning, and parallel and distributed computing.
Established: February 17, 2016
PACORA (Performance-Aware Convex Optimization for Resource Allocation) is a resource allocation framework for general-purpose operating systems and cloud systems. It is designed to provide responsiveness guarantees to a simultaneous mix of high-throughput parallel, interactive, and real-time applications, improving efficiency without sacrificing responsiveness or performance.
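PACORA's actual formulation is more involved, but the core idea of allocating a shared budget by solving a convex program can be illustrated with a classic water-filling problem. The sketch below is a generic example, not PACORA's algorithm: it maximizes the concave utility sum_i log(a_i + x_i) over allocations x_i >= 0 that sum to a fixed budget, where the baselines a_i and the function names are illustrative assumptions.

```python
# Illustrative water-filling allocation: maximize sum_i log(a_i + x_i)
# subject to sum_i x_i = budget, x_i >= 0.  The KKT conditions give
# x_i = max(0, nu - a_i) for a "water level" nu chosen by bisection
# so that the budget is used exactly.
def water_filling(a, budget, tol=1e-10):
    lo, hi = min(a), max(a) + budget
    while hi - lo > tol:
        nu = 0.5 * (lo + hi)
        used = sum(max(0.0, nu - ai) for ai in a)
        if used < budget:
            lo = nu          # water level too low: not all budget spent
        else:
            hi = nu          # water level too high: budget exceeded
    nu = 0.5 * (lo + hi)
    return [max(0.0, nu - ai) for ai in a]

# Applications with a smaller baseline a_i receive more of the budget.
alloc = water_filling([1.0, 2.0, 4.0], budget=3.0)
```

Because the objective is concave and the constraint set is a simplex, this one-dimensional search recovers the global optimum; realistic schedulers add per-application performance models and responsiveness constraints on top of this basic structure.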
An Accelerated Randomized Proximal Coordinate Gradient Method and its Application to Regularized Empirical Risk Minimization. Qihang Lin, Zhaosong Lu, Lin Xiao. SIAM Journal on Optimization, Society for Industrial and Applied Mathematics, November 1, 2015.

An Accelerated Proximal Coordinate Gradient Method and its Application to Regularized Empirical Risk Minimization. Qihang Lin, Zhaosong Lu, Lin Xiao. Microsoft Research, July 1, 2014.

An Adaptive Accelerated Proximal Gradient Method and Its Homotopy Continuation for Sparse Optimization. Qihang Lin, Lin Xiao. Proceedings of the 31st International Conference on Machine Learning (ICML), June 1, 2014.

A Voted Regularized Dual Averaging Method for Large-Scale Discriminative Training in Natural Language Processing. Jianfeng Gao, Tianbing Xu, Lin Xiao, Xiaodong He. September 1, 2013.

An Adaptive Accelerated Proximal Gradient Method and Its Homotopy Continuation for Sparse Optimization. Qihang Lin, Lin Xiao. April 5, 2013.

New Convex Programs and Distributed Algorithms for Fisher Markets with Linear and Spending Constraint Utilities. Benjamin Birnbaum, Nikhil R. Devanur, Lin Xiao. August 1, 2010.

The Fastest Mixing Markov Process on a Graph and a Connection to a Maximum Variance Unfolding Problem. Jun Sun, Stephen Boyd, Lin Xiao, Persi Diaconis. Society for Industrial and Applied Mathematics, November 1, 2006.
Proximal-Gradient Homotopy Method for Sparse Least Squares
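Several of the publications above concern proximal-gradient methods with homotopy (continuation) on the regularization parameter. The sketch below is a minimal generic version of that idea, not the papers' exact algorithms: iterative soft-thresholding (ISTA) for the lasso objective 0.5*||Ax - b||^2 + lam*||x||_1, run over a geometrically decreasing sequence of lambda values. The function names, the decay factor `eta`, and the per-stage iteration count are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1: shrink each entry toward zero by t.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_grad_homotopy(A, b, lam_target, eta=0.7, iters_per_stage=100):
    """ISTA with a homotopy schedule on the regularization parameter.

    Minimizes 0.5*||Ax - b||^2 + lam*||x||_1, starting from the largest
    lambda with a nonzero solution and shrinking it geometrically by
    eta until lam_target is reached; each stage warm-starts the next.
    """
    m, n = A.shape
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    lam = np.max(np.abs(A.T @ b))        # above this lambda, x = 0 is optimal
    x = np.zeros(n)
    while True:
        lam = max(lam * eta, lam_target)
        for _ in range(iters_per_stage):
            grad = A.T @ (A @ x - b)
            x = soft_threshold(x - grad / L, lam / L)
        if lam <= lam_target:
            return x
```

The continuation schedule matters because early stages with large lambda keep the iterates very sparse, which both speeds up the inner iterations and provides a good warm start for the harder, weakly regularized final stage.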