I am a Principal Researcher at MSR New York City. I also hold an (unpaid) Adjunct Associate Professor position at the University of Maryland, College Park. Before joining MSR-NYC in 2013, I had been a member of MSR Silicon Valley since 2007. I received my Ph.D. in Computer Science from Cornell University in 2006, advised by Jon Kleinberg.
Full publication list (with abstracts, by year and by topic).
My research interests are in algorithms and theoretical computer science, spanning machine learning theory, algorithmic economics, and networks. Across these domains, I am drawn to algorithmic problems with informational constraints. I am particularly interested in online machine learning and the exploration-exploitation tradeoff, and in their manifestations in mechanism design and human computation. Much of my earlier research concerned the analysis of Internet and social networks, metric embeddings, and distance/routing data structures. My work has been recognized with the Best Paper Award at ACM EC 2010, a Best Paper nomination at WWW 2015, and the Best Student Paper Award at ACM PODC 2005.
“Projects” at MSR
Real-world Reinforcement Learning
Multiworld Testing: a methodology and a system for contextual bandits.
Explore-exploit learning at MSR-NYC
Multi-armed bandits at MSR-SV (inactive since the closure of MSR-SV)
Former interns: Mahsa Derakhshan (2019), Karthik Abinav Sankararaman (2018), Jieming Mao (2018), Manish Raghavan (2017), Mathias Lecuyer (2017), Steven Wu (2015, 2016), Chien-Ju Ho (2013, 2014), Ashwinkumar Badanidiyuru (2012, 2013), Sigal Oren (2011), Shiri Chechik (2010), Yogeshwer Sharma (2008).
Incentivizing High Quality Crowdwork
Chien-Ju Ho, Aleksandrs Slivkins, Siddharth Suri, and Jennifer Wortman Vaughan
WWW 2015: 24th Intl. World Wide Web Conference (Nominated for Best Paper Award).
Short version: SIGecom Exchanges, Dec 2015.
Online Decision Making in Crowdsourcing Markets: Theoretical Challenges
[position paper & survey]
Aleksandrs Slivkins and Jennifer Wortman Vaughan
SIGecom Exchanges, Dec 2013.
Bandits with Knapsacks (rev. 2017)
Ashwinkumar Badanidiyuru, Robert Kleinberg, and Aleksandrs Slivkins.
FOCS 2013: IEEE Symp. on Foundations of Computer Science.
J. of the ACM, Vol. 65, Issue 3, March 2018.
Low-distortion Inference of Latent Similarities from a Multiplex Social Network
Ittai Abraham, Shiri Chechik, David Kempe, and Aleksandrs Slivkins.
SIAM J. on Computing, Vol. 44(3), 2015.
SODA 2013: ACM-SIAM Symp. on Discrete Algorithms.
Truthful Mechanisms with Implicit Payment Computation
Moshe Babaioff, Robert Kleinberg, and Aleksandrs Slivkins.
J. of the ACM, Vol. 62, Issue 2, May 2015.
EC 2010: ACM Symp. on Electronic Commerce (Best Paper Award).
Meridian: A Lightweight Network Location Service without Virtual Coordinates (project)
Bernard Wong, Aleksandrs Slivkins, and Emin G. Sirer.
ACM SIGCOMM 2005.
Distance Estimation and Object Location via Rings of Neighbors
PODC 2005: ACM Symp. on Principles of Distributed Computing
(Best Student Paper Award).
Special issue of Distributed Computing: Vol. 19, No. 4, March 2007.
Metric Embeddings with Relaxed Guarantees
T-H.H. Chan, K. Dhamdhere, A. Gupta, J. Kleinberg and A. Slivkins.
SIAM J. on Computing, 38(6): 2303-2329, March 2009.
FOCS 2005: IEEE Symp. on Foundations of Computer Science.
Triangulation and Embedding using Small Sets of Beacons
Jon Kleinberg, Aleksandrs Slivkins, and Tom Wexler.
J. of the ACM, 56(6), Sept 2009.
FOCS 2004: IEEE Symp. on Foundations of Computer Science.
Ph.D. Thesis: Embedding, Distance Estimation and Object Location in Networks (2006).
Abstract: Multi-armed bandits are a simple but very powerful framework for algorithms that make decisions over time under uncertainty. An enormous body of work has accumulated over the years, and is covered in several books and surveys. This book provides a more introductory, textbook-like treatment of the subject. Each chapter tackles a particular line of work, providing a self-contained, teachable technical introduction and a review of the more advanced results.
The chapters are as follows:
- Stochastic Bandits
- Lower Bounds
- Bayesian Bandits and Thompson Sampling
- Lipschitz Bandits
- Full Feedback and Adversarial Costs
- Adversarial Bandits
- Linear Costs and Semi-bandits
- Contextual Bandits
- Bandits and Zero-Sum Games
- Bandits with Knapsacks
- Incentivized Exploration and Connections to Mechanism Design
Status of the manuscript: essentially complete (modulo some polishing), except for the last chapter, which I plan to add over the next few months.
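To give a flavor of the framework: a bandit algorithm repeatedly chooses among "arms" and observes the reward of the chosen arm only. The sketch below is a minimal implementation of the classic UCB1 algorithm for stochastic bandits; it is an illustration only (the function name `ucb1` and the toy Bernoulli arms are mine, not the book's).

```python
import math
import random

def ucb1(reward_fns, horizon, seed=0):
    """Minimal UCB1 sketch: pull each arm once, then repeatedly pick the arm
    maximizing (average reward) + (confidence bonus)."""
    rng = random.Random(seed)
    n_arms = len(reward_fns)
    counts = [0] * n_arms    # number of pulls per arm
    sums = [0.0] * n_arms    # cumulative reward per arm
    total = 0.0
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1      # initialization: try each arm once
        else:
            # confidence bonus shrinks as an arm is pulled more often
            arm = max(range(n_arms), key=lambda a:
                      sums[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        r = reward_fns[arm](rng)   # bandit feedback: only the chosen arm's reward
        counts[arm] += 1
        sums[arm] += r
        total += r
    return total, counts

# Two Bernoulli arms with means 0.3 and 0.7; UCB1 should favor the second.
arms = [lambda rng: float(rng.random() < 0.3),
        lambda rng: float(rng.random() < 0.7)]
total, counts = ucb1(arms, horizon=2000)
```

After 2000 rounds, the better arm accumulates most of the pulls, which is the exploration-exploitation tradeoff in action: the bonus term forces occasional exploration, while the average-reward term exploits what has been learned.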
Some recent program committees:
SCUGC 2015: 5th Workshop on Social Computing and User Generated Content at EC’15.
NYCE Day 2014: 7th annual New York Computer Science and Economics Day.
SCUGC 2014: 4th Workshop on Social Computing and User Generated Content at EC’14.
Advanced Topics in Theory of Computing: Bandits, Experts, and Games (Fall 2016)
Computer Science Department, University of Maryland at College Park
Bandits and Reinforcement Learning (Fall 2017, co-taught with Alekh Agarwal)
Computer Science Department, Columbia University
Tutorial on Dynamic Pricing under Model Uncertainty: Learning & Earning
Joint with Assaf Zeevi. Presented at ACM EC 2015 (tutorial proposal).