Microsoft @ NIPS 2017

About

Microsoft is excited to be a Platinum sponsor of the thirty-first annual conference on Neural Information Processing Systems (NIPS). Over 80 of our researchers are involved in spotlight sessions, presentations, symposiums, posters, accepted papers, and workshops at NIPS (see schedule below). Stop by our booth (#315, Exhibit Hall B) to see HoloLens and Windows Mixed Reality in action, as well as to find out about career opportunities at Microsoft and enter for your chance to win an Xbox One X gaming console. Follow @MSFTResearch for the latest information coming from the event.

NIPS organizing committee

Hanna Wallach, Program Co-Chair
Jennifer Wortman Vaughan, Tutorial Chair
Markus Weimer, Demonstration and Competition Chair

Workshop organizers

Evelyne Viegas, Machine Learning Challenges as a Research Tool
Nicolo Fusi, Machine Learning in Computational Biology
Siddhartha Sen, ML Systems Workshop
Alekh Agarwal, OPT 2017: Optimization for Machine Learning
Jennifer Wortman Vaughan, Learning in the Presence of Strategic Behavior
Manik Varma, Extreme Classification: Multi-class & Multi-label Learning in Extremely Large Label Spaces
Vasilis Syrgkanis, Learning in the Presence of Strategic Behavior

Symposium organizers

Patrice Simard, Interpretable Machine Learning
Rich Caruana, Interpretable Machine Learning

Invited speaker

December 5 @ 1:50–2:40 PM | Kate Crawford, The Trouble with Bias

Careers at Microsoft information session

December 6 @ 12:30–1:20 PM | Eric Horvitz, Christopher Bishop, Jennifer Chayes, and Mir Rosenberg

Spotlight sessions

December 5 @ 3:30–3:35 PM | Clustering Billions of Reads for DNA Data Storage
Cyrus Rashtchian, Konstantin Makarychev, Luis Ceze, Karin Strauss, Sergey Yekhanin, Djordje Jevdjic, Miklos Racz, and Siena Ang

December 6 @ 11:20–11:25 AM | Submultiplicative Glivenko-Cantelli and Uniform Convergence of Revenues
Noga Alon, Moshe Babaioff, Yannai A. Gonczarowski, Yishay Mansour, Shay Moran, and Amir Yehudayoff

December 6 @ 5:15–5:20 PM | Repeated Inverse Reinforcement Learning
Kareem Amin, Nan Jiang, and Satinder Singh

Oral presentations

December 5 @ 10:55–11:00 AM | Robust Optimization for Non-Convex Objectives
Yaron Singer, Robert S Chen, Vasilis Syrgkanis, and Brendan Lucier

December 6 @ 4:20–4:35 PM | Off-policy Evaluation for Slate Recommendation
Adith Swaminathan, Akshay Krishnamurthy, Alekh Agarwal, Miro Dudik, John Langford, Damien Jose, and Imed Zitouni

Accepted Papers

“A Decomposition of Forecast Error in Prediction Markets” by Miro Dudik, Sebastien Lahaie, Ryan M Rogers, and Jennifer Wortman Vaughan

“A Highly Efficient Gradient Boosting Decision Tree” by Guolin Ke, Qi Meng, Taifeng Wang, Wei Chen, Weidong Ma, and Tie-Yan Liu

“A Sample Complexity Measure with Applications to Learning Optimal Auctions” by Vasilis Syrgkanis

“Accuracy First: Selecting a Differential Privacy Level for Accuracy Constrained ERM” by Steven Wu, Bo Waggoner, Seth Neel, Aaron Roth, and Katrina Ligett

“Adversarial Ranking for Language Generation” by Dianqi Li, Kevin Lin, Xiaodong He, Ming-ting Sun, and Zhengyou Zhang

“Clustering Billions of Reads for DNA Data Storage” by Cyrus Rashtchian, Konstantin Makarychev, Luis Ceze, Karin Strauss, Sergey Yekhanin, Djordje Jevdjic, Miklos Racz, and Siena Ang

“Collecting Telemetry Data Privately” by Bolin Ding, Janardhan Kulkarni, and Sergey Yekhanin

“Consistent Robust Regression” by Kush Bhatia, Prateek Jain, and Purushottam Kar

“Decoding with Value Networks for Neural Machine Translation” by Di He, Hanqing Lu, Yingce Xia, Tao Qin, Liwei Wang, and Tie-Yan Liu

“Deliberation Networks: Sequence Generation Beyond One-Pass Decoding” by Yingce Xia, Lijun Wu, Jianxin Lin, Fei Tian, Tao Qin, and Tie-Yan Liu

“Efficiency Guarantees from Data” by Darrell Hoy, Tremor Technologies; Denis Nekipelov, University of Virginia; and Vasilis Syrgkanis, Microsoft Research

“Estimating Accuracy from Unlabeled Data: A Probabilistic Logic Approach” by Emmanouil Platanios, Carnegie Mellon University; Hoifung Poon, Microsoft Research; Tom M. Mitchell, Carnegie Mellon University; and Eric J. Horvitz, Microsoft Research

“From Bayesian Sparsity to Gated Recurrent Nets” by Hao He, Massachusetts Institute of Technology; Bo Xin, Microsoft Research; and David Wipf, Microsoft Research

“Hybrid Reward Architecture for Reinforcement Learning” by Harm Van Seijen, Microsoft Research; Romain Laroche, Microsoft Research, Maluuba; Mehdi Fatemi, Microsoft Research; and Joshua Romoff, McGill University

“Identifying Outlier Arms in Multi-Armed Bandit” by Honglei Zhuang, University of Illinois; Chi Wang, Microsoft Research; and Yifan Wang, Tsinghua University

“Improving Regret Bounds for Combinatorial Semi-Bandits with Probabilistically Triggered Arms and Its Applications” by Qinshi Wang and Wei Chen

“Inference in Graphical Models via Semidefinite Programming Hierarchies” by Murat Erdogdu, Yash Deshpande, and Andrea Montanari

“Influence Maximization with ε-Almost Submodular Threshold Function” by Qiang Li, Institute of Computing Technology, Chinese Academy of Sciences; Wei Chen, Microsoft Research; Xiaoming Sun, Institute of Computing Technology, Chinese Academy of Sciences; and Jialin Zhang, Institute of Computing Technology, Chinese Academy of Sciences

“Large-Scale Quadratically Constrained Quadratic Program via Low-Discrepancy Sequences” by Kinjal Basu, Ankan Saha, and Shaunak Chatterjee, LinkedIn Corporation

“Learning Mixture of Gaussians with Streaming Data” by Aditi Raghunathan, Stanford University; Prateek Jain, Microsoft Research; and Ravishankar Krishnaswamy, Microsoft Research

“Linear Convergence of a Frank-Wolfe Type Algorithm over Trace-Norm Balls” by Zeyuan Allen-Zhu, Microsoft Research; Elad Hazan, Princeton University; Wei Hu, Princeton University; and Yuanzhi Li, Princeton University

“The Importance of Communities for Learning to Influence” by Eric Balkanski, Harvard University; Nicole Immorlica, Microsoft Research; and Yaron Singer, Harvard University

“Mean Field Residual Networks: On the Edge of Chaos” by Greg Yang, Microsoft Research; and Samuel S. Schoenholz, Google Brain

“Multi-Task Learning for Contextual Bandits” by Aniket Anand Deshmukh, University of Michigan, Ann Arbor; Urun Dogan, Microsoft; and Clay Scott, University of Michigan

“Neural Program Meta-Induction” by Jacob Devlin, Microsoft Research; Rudy R Bunel, Oxford University; Rishabh Singh, Microsoft Research; Matthew Hausknecht, Microsoft Research; and Pushmeet Kohli, DeepMind

“Non-convex Robust PCA” by Praneeth Netrapalli, Microsoft Research; Niranjan Uma Naresh, UC Irvine; Sujay Sanghavi, UT Austin; Animashree Anandkumar, UC Irvine; and Prateek Jain, Microsoft Research

“Off-policy Evaluation for Slate Recommendation” by Adith Swaminathan, Microsoft Research; Akshay Krishnamurthy, University of Massachusetts; Alekh Agarwal, Microsoft Research; Miro Dudik, Microsoft Research; John Langford, Microsoft Research; Damien Jose, Microsoft; and Imed Zitouni, Microsoft Research

“Online Learning with a Hint” by Ofer Dekel, Microsoft Research; Arthur Flajolet, Massachusetts Institute of Technology; Nika Haghtalab, Carnegie Mellon University; and Patrick Jaillet, Massachusetts Institute of Technology

“Plan, Attend, Generate: Planning for Sequence-to-Sequence Models” by Caglar Gulcehre, DeepMind; Francis Dutil, MILA; Adam Trischler, Microsoft; and Yoshua Bengio, University of Montreal

“Q-LDA: Uncovering Latent Patterns in Text-based Sequential Decision Processes” by Jianshu Chen, Microsoft Research; Chong Wang, Princeton University; Lin Xiao, Microsoft Research; Ji He, University of Washington; Lihong Li, Microsoft Research; and Li Deng, Citadel

“QSGD: Communication-Efficient Stochastic Gradient Descent, with Applications to Neural Networks” by Dan Alistarh, Demjan Grubic, Jerry Li, Ryota Tomioka, and Milan Vojnovic

“Repeated Inverse Reinforcement Learning” by Kareem Amin, Google Research; Nan Jiang, Microsoft Research; and Satinder Singh, University of Michigan

“Robust Estimation of Neural Signals in Calcium Imaging” by Hakan Inan, Stanford University; Murat Erdogdu, Microsoft Research; and Mark Schnitzer, Stanford University

“Robust Optimization for Non-Convex Objectives” by Yaron Singer, Harvard University; Robert S Chen, Harvard University; Vasilis Syrgkanis, Microsoft Research; and Brendan Lucier, Microsoft Research

“Stabilizing Training of Generative Adversarial Networks through Regularization” by Kevin Roth, ETH Zurich; Aurelien Lucchi, ETH Zurich; Sebastian Nowozin, Microsoft Research; and Thomas Hofmann, ETH Zurich

“Submultiplicative Glivenko-Cantelli and Uniform Convergence of Revenues” by Noga Alon, Tel Aviv University; Moshe Babaioff, Microsoft Research; Yannai A. Gonczarowski, The Hebrew University of Jerusalem and Microsoft Research; Yishay Mansour, Tel Aviv University; Shay Moran, IAS, Princeton; and Amir Yehudayoff, Technion – Israel Institute of Technology

“The Numerics of GANs” by Lars Mescheder, Max Planck Institute Tübingen; Sebastian Nowozin, Microsoft Research; and Andreas Geiger, MPI Tübingen

“Thy Friend is My Friend: Iterative Collaborative Filtering for Sparse Matrix Estimation” by Christian Borgs, Microsoft Research; Jennifer Chayes, Microsoft Research; Christina Lee, Microsoft Research; and Devavrat Shah, Massachusetts Institute of Technology

“Unsupervised Sequence Classification using Sequential Output Statistics” by Yu Liu, SUNY Buffalo; Jianshu Chen, Microsoft Research; and Li Deng, Citadel

“Z-Forcing: Training Stochastic Recurrent Networks” by Marc-Alexandre Côté, Microsoft Research; Alessandro Sordoni, Microsoft Research, Maluuba; Anirudh Goyal, Université de Montréal; Nan Ke, MILA, École Polytechnique de Montréal; and Yoshua Bengio, University of Montreal

Posters

A Decomposition of Forecast Error in Prediction Markets
Miro Dudik (Microsoft Research), Jennifer Wortman Vaughan (Microsoft Research)

A Highly Efficient Gradient Boosting Decision Tree
Guolin Ke (Microsoft Research), Taifeng Wang (Microsoft Research), Wei Chen (Microsoft Research), Weidong Ma (Microsoft Research), Tie-Yan Liu (Microsoft Research)

A Sample Complexity Measure with Applications to Learning Optimal Auctions
Vasilis Syrgkanis (Microsoft Research)

Adversarial Ranking for Language Generation
Xiaodong He (Microsoft Research), Zhengyou Zhang (Microsoft Research)

Clustering Billions of Reads for DNA Data Storage
Luis Ceze, Karin Strauss (Microsoft Research), Sergey Yekhanin (Microsoft Research), Djordje Jevdjic (Microsoft Research), Siena Ang (Microsoft), Konstantin Makarychev (Microsoft)

Collecting Telemetry Data Privately
Bolin Ding (Microsoft Research), Janardhan Kulkarni (Microsoft Research), Sergey Yekhanin (Microsoft Research)

QSGD: Communication-Efficient Stochastic Gradient Descent, with Applications to Neural Networks
Ryota Tomioka (Microsoft Research)

Consistent Robust Regression
Prateek Jain (Microsoft Research)

Decoding with Value Networks for Neural Machine Translation
Di He, Tao Qin (Microsoft Research), Tie-Yan Liu (Microsoft Research)

Deliberation Networks: Sequence Generation Beyond One-Pass Decoding
Jianxin Lin, Fei Tian (Microsoft Research), Tao Qin (Microsoft Research), Tie-Yan Liu (Microsoft Research)

Efficiency Guarantees from Data
Vasilis Syrgkanis (Microsoft Research)

Estimating Accuracy from Unlabeled Data: A Probabilistic Logic Approach
Hoifung Poon (Microsoft Research), Eric Horvitz (Microsoft Research)

From Bayesian Sparsity to Gated Recurrent Nets
David Wipf (Microsoft Research)

Hybrid Reward Architecture for Reinforcement Learning
Harm Van Seijen (Microsoft Research), Mehdi Fatemi (Microsoft Research)

Identifying Outlier Arms in Multi-Armed Bandit
Chi Wang (Microsoft Research)

Improving Regret Bounds for Combinatorial Semi-Bandits with Probabilistically Triggered Arms and Its Applications
Wei Chen (Microsoft Research)

Influence Maximization with ε-Almost Submodular Threshold Function
Wei Chen (Microsoft Research)

Large-Scale Quadratically Constrained Quadratic Program via Low-Discrepancy Sequences
Kinjal Basu (LinkedIn Corporation), Ankan Saha (LinkedIn Corporation), Shaunak Chatterjee (LinkedIn Corporation)

Learning Mixture of Gaussians with Streaming Data
Prateek Jain (Microsoft Research)

Neural Program Meta-Induction
Jacob Devlin (Microsoft Research), Rishabh Singh (Microsoft Research), Matthew Hausknecht (Microsoft Research)

Off-policy Evaluation for Slate Recommendation
Adith Swaminathan (Microsoft Research), Alekh Agarwal (Microsoft Research), Miro Dudik (Microsoft Research), Damien Jose (Microsoft), Imed Zitouni (Microsoft Research)

Online Learning with a Hint
Ofer Dekel (Microsoft Research)

Plan, Attend, Generate: Planning for Sequence-to-Sequence Models
Adam Trischler (Microsoft)

Q-LDA: Uncovering Latent Patterns in Text-based Sequential Decision Processes
Jianshu Chen (Microsoft Research), Lin Xiao (Microsoft Research)

Robust Optimization for Non-Convex Objectives
Vasilis Syrgkanis (Microsoft Research), Brendan Lucier (Microsoft Research)

Stabilizing Training of Generative Adversarial Networks through Regularization
Sebastian Nowozin (Microsoft Research)

Submultiplicative Glivenko-Cantelli and Uniform Convergence of Revenues
Moshe Babaioff (Microsoft Research)

The Numerics of GANs
Sebastian Nowozin (Microsoft Research)

Thy Friend is My Friend: Iterative Collaborative Filtering for Sparse Matrix Estimation
Christian Borgs (Microsoft Research), Jennifer Chayes (Microsoft Research)

Unsupervised Sequence Classification using Sequential Output Statistics
Jianshu Chen (Microsoft Research)

Z-Forcing: Training Stochastic Recurrent Networks
Marc-Alexandre Côté (Microsoft Research), Alessandro Sordoni (Microsoft Research, Maluuba)

Workshops

NIPS 2017 Workshops

From “What if?” To “What Next?”: Causal Inference and Machine Learning for Intelligent Decision Making

Friday, December 8 @ 9:00 AM–6:30 PM | Hall C | Adith Swaminathan, Microsoft Research

This workshop is aimed at facilitating more interactions between researchers in machine learning and causal inference. In particular, it is an opportunity to bring together highly technical individuals who are strongly motivated by the practical importance and real-world impact of their work. Cultivating such interactions will lead to the development of theory, methodology, and, most importantly, practical tools that better target causal questions across different domains.

In particular, we will highlight theory, algorithms, and applications that treat automatic decision-making systems, such as recommendation engines, medical decision systems, and self-driving cars, as both producers and users of data. The central challenge is the feedback loop between learning from data and then taking actions that affect what data will be available for future learning. Learning algorithms have to reason about how changes to the system will affect future data, which gives rise to challenging counterfactual and causal reasoning issues. Modern and scalable policy learning algorithms also need to operate on non-experimental data, such as logged user interactions in which users click on ads suggested by recommender systems trained on historical user clicks.

To further bring the community together around the use of such interaction data, this workshop will host a Kaggle challenge problem based on the first real-world dataset of logged contextual bandit feedback with non-uniform action-selection propensities. The dataset consists of several gigabytes of data from an ad placement system, which we have processed into multiple well-defined learning problems of increasing complexity in feedback signal and context. Participants in the challenge problem will be able to discuss their results at the workshop.
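For readers new to this setting, the minimal Python sketch below illustrates the inverse propensity scoring idea that underlies off-policy evaluation from logged bandit feedback of this kind; the tuple layout and the new_policy_prob function are illustrative assumptions, not the format of the challenge dataset.

    import numpy as np

    # Minimal sketch of inverse propensity scoring (IPS) for off-policy evaluation.
    # Assumes logged tuples (context, action, reward, propensity), where "propensity"
    # is the probability with which the logging policy chose the logged action.
    # The tuple layout and new_policy_prob are illustrative assumptions.
    def ips_estimate(logged_data, new_policy_prob):
        weighted_rewards = []
        for context, action, reward, propensity in logged_data:
            # Reweight each logged reward by how much more (or less) likely the
            # new policy is to choose the logged action than the logging policy was.
            weight = new_policy_prob(context, action) / propensity
            weighted_rewards.append(weight * reward)
        return np.mean(weighted_rewards)

    # Toy usage: evaluate a policy that always shows ad 1 against logs collected
    # by a non-uniform logging policy.
    logs = [("user_a", 0, 0.0, 0.8), ("user_b", 1, 1.0, 0.2), ("user_c", 1, 0.0, 0.2)]
    print(ips_estimate(logs, lambda context, action: 1.0 if action == 1 else 0.0))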

Machine Learning and Computer Security

Friday, December 8 @ 9:00 AM–5:00 PM | Hyatt Hotel, Shoreline | Donald Brinkman, Microsoft Research

While traditional computer security relies on well-defined attack models and proofs of security, a science of security for machine learning systems has proven more elusive. This is due to a number of obstacles, including (1) the highly varied angles of attack against ML systems, (2) the lack of a clearly defined attack surface (because the source of the data analyzed by ML systems is not easily traced), and (3) the lack of clear formal definitions of security that are appropriate for ML systems. At the same time, the security of ML systems is of great import due to the recent trend of using ML systems as a line of defense against malicious behavior (e.g., network intrusion, malware, and ransomware), as well as the prevalence of ML systems as parts of sensitive and valuable software systems (e.g., sentiment analyzers for predicting stock prices). This workshop will bring together experts from the computer security and machine learning communities in an attempt to highlight recent work in this area, as well as to clarify the foundations of secure ML and chart out important directions for future work and cross-community collaborations.

Conversational AI – today’s practice and tomorrow’s potential

Friday, December 8 @ 8:00 AM–7:00 PM | Grand Ballroom B | Jason Williams, Microsoft Research

This workshop will include invited talks from academia and industry, contributed work, and open discussion. In these talks, senior technical leaders from many of the most popular conversational services will give insights into real usage and challenges at scale. An open call for papers will be issued, and we will prioritize forward-looking papers that propose interesting and impactful contributions. We will end the day with an open discussion, including a panel consisting of academic and industrial researchers.

Interpreting, Explaining and Visualizing Deep Learning…now what?

Saturday, December 9 @ 8:15 AM–6:30 PM | Hyatt Regency Ballroom | Hamid Palangi, Qiuyuan Huang, Paul Smolensky, and Xiaodong He, Microsoft Research

Our NIPS 2017 Workshop “Interpreting, Explaining and Visualizing Deep Learning – Now what?” aims to review recent techniques and establish new theoretical foundations for interpreting and understanding deep learning models. However, it will not stop at the methodological level, but also address the “now what?” question. This strong focus on the applications of interpretable methods in deep learning distinguishes this workshop from previous events, as we aim to take the next step by exploring and extending the practical usefulness of interpreting, explaining, and visualizing in deep learning. With this workshop, we also aim to identify new fields of application for interpretable deep learning. Since the workshop will host invited speakers from various application domains (computer vision, NLP, neuroscience, medicine), it will provide an opportunity for participants to learn from each other and initiate new interdisciplinary collaborations. The workshop will contain invited research talks, short methods and applications talks, a poster and demonstration session, and a panel discussion. A selection of accepted papers, together with the invited contributions, will be published in an edited book by Springer LNCS in order to provide a representative overview of recent activities in this emerging research field.

Co-located workshops

Women in Machine Learning

Monday, December 4 & Thursday, December 7 @ 2:00 PM–2:30 PM | Room 104 | 12th Women in Machine Learning Workshop (WiML 2017), by Hanna Wallach, Microsoft Research

The annual Women in Machine Learning Workshop is the flagship event of Women in Machine Learning. This technical workshop gives female faculty, research scientists, and graduate students in the machine learning community an opportunity to meet, network, and exchange ideas; participate in career-focused panel discussions with senior women in industry and academia; and learn from each other. Underrepresented minorities and undergraduates interested in machine learning research are encouraged to attend. We welcome all genders; however, any formal presentations, i.e., talks and posters, are given by women. We strive to create an atmosphere in which participants feel comfortable engaging in technical and career-related conversations.

Black in AI

Friday, December 8 @ 1:30 PM–5:30 PM | Black in AI Workshop @ NIPS 2017, by Timnit Gebru, Microsoft Research

The first Black in AI event will be co-located with NIPS 2017. The goal is to gather people in the field to share ideas and discuss initiatives to increase the presence of Black people in artificial intelligence, both to improve diversity and to help prevent data bias. At this workshop, Black researchers in AI will also have the opportunity to present their work during our oral and poster sessions.