Scaling Up Reinforcement Learning


July 29, 2015


B. Ravindran


Indian Institute of Technology Madras


Distributed machine learning is an important area that has been receiving considerable attention from academic and industrial communities, as data is growing at an unprecedented rate. In the first part of the talk, we review several popular approaches that have been proposed or used to learn classifier models in the big-data scenario. With commodity clusters priced on system configurations becoming popular, machine learning algorithms have to be aware of the computation and communication costs involved in order to be cost-effective and efficient. In the second part of the talk, we focus on methods that address this problem; in particular, considering different data distribution settings (e.g., example and feature partitions), we present efficient distributed learning algorithms that trade off computation and communication costs.
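The example-partition setting mentioned above can be made concrete with a small sketch. This is not the speakers' algorithm; it is a minimal illustration, assuming synchronous gradient averaging for logistic regression, where each simulated worker holds a disjoint subset of the examples and communicates only one gradient vector per round (the per-round communication cost the abstract alludes to).

```python
import numpy as np

def local_gradient(w, X, y):
    """Logistic-loss gradient computed on one worker's shard of examples."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return X.T @ (p - y) / len(y)

def distributed_train(shards, dim, rounds=200, lr=0.5):
    """Synchronous distributed training over an example partition.

    Communication per round: each worker sends one dim-sized gradient
    vector; the coordinator averages them and broadcasts the update.
    """
    w = np.zeros(dim)
    for _ in range(rounds):
        grads = [local_gradient(w, X, y) for X, y in shards]
        w -= lr * np.mean(grads, axis=0)  # averaging = the communication step
    return w

# Synthetic data; split rows across 4 simulated workers (features shared).
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = (X @ true_w + 0.1 * rng.normal(size=400) > 0).astype(float)

shards = [(X[i::4], y[i::4]) for i in range(4)]
w = distributed_train(shards, dim=3)
acc = np.mean(((X @ w) > 0) == y)
```

A feature partition would instead split the columns of X across workers, changing what must be communicated (partial inner products rather than full gradients); the talk's algorithms trade off these two costs.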


B. Ravindran

I am currently on sabbatical at the Department of Computer Science and Automation in the Indian Institute of Science, Bangalore.

I am an associate professor at the Department of Computer Science and Engineering at the Indian Institute of Technology Madras. I completed my Ph.D. at the Department of Computer Science, University of Massachusetts Amherst, where I worked with Prof. Andrew G. Barto on an algebraic framework for abstraction in reinforcement learning.

My current research interests span the broader area of machine learning, ranging from spatio-temporal abstractions in reinforcement learning to social network analysis and data/text mining. Much of the work in my group is directed toward understanding interactions and learning from them.