Learning and stochastic optimization with non-i.i.d. data
- Alekh Agarwal | UC Berkeley
ABSTRACT:
We study learning and optimization scenarios where the samples we receive do not obey the frequently made i.i.d. assumption, but are coupled over time. We show that as long as the samples come from a suitably mixing process, i.e. one whose dependence weakens over time, a large class of learning algorithms continues to enjoy good generalization guarantees. The result also has implications for stochastic optimization with non-i.i.d. samples. Specifically, we show that a large class of suitably stable online learning algorithms produces a predictor with small optimization error, as long as the samples come from a suitably ergodic process. Our mixing assumptions are satisfied by finite-state Markov chains, autoregressive processes, certain infinite and continuous state Markov chains, and various queuing processes. The talk will discuss applications including machine learning with non-i.i.d. data samples, optimization over high-dimensional and combinatorial spaces, and distributed optimization.
Based on joint work with John Duchi, Michael Jordan and Mikael Johansson.
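To make the setting concrete, here is a minimal sketch (not the speaker's code) of stochastic gradient descent driven by samples from a mixing finite-state Markov chain rather than i.i.d. draws. The two-state chain, quadratic objective, and step-size schedule below are illustrative assumptions only; the point is that the iterate still approaches the optimum under the chain's stationary distribution.

import numpy as np

rng = np.random.default_rng(0)

# Two-state Markov chain; the off-diagonal mass controls how fast it mixes.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
# Stationary distribution pi solves pi P = pi (closed form for a 2-state chain).
pi = np.array([P[1, 0], P[0, 1]]) / (P[0, 1] + P[1, 0])

# Per-state losses: minimize F(w) = sum_s pi[s] * 0.5 * (w - mu[s])**2,
# whose minimizer is the pi-weighted mean of mu.
mu = np.array([-1.0, 3.0])
w_star = pi @ mu

def sgd_on_chain(T=20000, w0=0.0):
    """Run SGD where the t-th sample's state is the t-th step of the chain."""
    w, state = w0, 0
    for t in range(1, T + 1):
        state = rng.choice(2, p=P[state])   # non-i.i.d.: next state depends on the current one
        grad = w - mu[state]                # gradient of 0.5 * (w - mu[state])**2
        w -= grad / np.sqrt(t)              # decaying step size
    return w

w_hat = sgd_on_chain()
print(f"SGD iterate: {w_hat:.3f}, stationary optimum: {w_star:.3f}")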
Speaker Details
Alekh Agarwal is a fifth-year PhD student at UC Berkeley, jointly advised by Peter Bartlett and Martin Wainwright. Alekh has received PhD fellowships from Microsoft Research and Google. His main research interests are in the areas of machine learning, convex optimization, high-dimensional statistics, distributed machine learning, and understanding the computational