Frontiers in Machine Learning 2020

The first virtual Frontiers in Machine Learning event took place July 20–23, 2020.

This four-day virtual conference brought together academics, researchers, and PhD students. The program was rich, engaging, and filled with current themes and research outcomes spanning theory and practice in Machine Learning. The agenda covered talks and discussions with Microsoft researchers and academic collaborators.

Agenda Overview

Date Time Program
Monday, July 20, 2020 9:00 AM–10:00 AM PDT Fireside Chat, Chris Bishop and Peter Lee
10:30 AM–12:00 PM PDT Machine Learning Conversations, a panel led by Susan Dumais
Tuesday, July 21, 2020 9:00 AM–12:30 PM PDT Security and Privacy in Machine Learning
1:00 PM–2:00 PM PDT Panel – Beyond Fairness: Pushing ML Frontiers for Social Equity
9:00 PM–10:30 PM PDT Causality and Machine Learning (special MSR India session)
Wednesday, July 22, 2020 9:00 AM–12:30 PM PDT Interpretability and Explanation
Thursday, July 23, 2020 9:00 AM–12:40 PM PDT Machine Learning Systems (topics include NLP and Climate Impact)
12:40 PM–12:45 PM PDT Closing Remarks

Program Committee

Vani Mandava, Sean Kuno, Kalika Bali, Debadeepta Dey, Christopher Bishop, Asli Celikyilmaz, Adam Trischler

MSR Events and Media

Sara Smith, Jen Viencek, Jeremy Crawford, and the RTE Media team

Microsoft’s Event Code of Conduct

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. This includes virtual events Microsoft hosts and participates in, where we seek to create a respectful, friendly, and inclusive experience for all participants. As such, we do not tolerate harassing or disrespectful behavior, messages, images, or interactions by any event participant, in any form, in any aspect of the program, including business and social activities, regardless of location.

We do not tolerate any behavior that is degrading to any gender, race, sexual orientation or disability, or any behavior that would violate Microsoft’s Anti-Harassment and Anti-Discrimination Policy, Equal Employment Opportunity Policy, or Standards of Business Conduct. In short, the entire experience must meet our culture standards. We encourage everyone to assist in creating a welcoming and safe environment. Please report any concerns, harassing behavior, or suspicious or disruptive activity. Microsoft reserves the right to ask attendees to leave at any time at its sole discretion.

Monday, July 20, 2020

Time (PDT) Session Speaker(s)
9:00 AM–9:10 AM Welcome and Kick-Off Sandy Blyth, Managing Director
Microsoft Research Outreach
9:10 AM–10:00 AM Fireside Chat
Christopher Bishop, Technical Fellow and Lab Director
Microsoft Research Cambridge

Peter Lee, Corporate Vice President
Microsoft Research & Incubation

10:00 AM–10:30 AM BREAK
10:30 AM–12:00 PM Machine Learning Conversations
Susan Dumais, Technical Fellow & Managing Director
Microsoft Research New England, New York City and Montreal

Katja Hofmann, Principal Researcher
Microsoft Research Cambridge
Learning to Adapt: Advances in Deep Meta Reinforcement Learning

Akshay Krishnamurthy, Principal Researcher
Microsoft Research NYC
Generalization and Exploration in Reinforcement Learning

Asli Celikyilmaz, Principal Researcher
Microsoft Research AI
Modeling Discourse in Long-Text Generation

Dan Klein, Technical Fellow
Microsoft Semantic Machines
Conversational AI: A View from Semantic Machines

Tuesday, July 21, 2020

Theme: Security and Privacy in Machine Learning

Time (PDT) Session Speaker / Talk Title
9:00 AM–10:30 AM Accelerating Machine Learning with Confidential Computing

Session Leads: Alex Shamis, Microsoft and Stavros Volos, Microsoft

Session Abstract: In recent years, Machine Learning (ML) has facilitated key applications, such as medical imaging, video analytics, and financial forecasting. Recognizing the massive computing requirements of ML, cloud providers have been investing in accelerated ML computing and a range of ML services. A key concern in such systems, however, is the privacy of the sensitive data being analyzed and the confidentiality of the trained models. Confidential cloud computing provides a vehicle for privacy-preserving ML, enabling multiple entities to collaborate and train accurate models using sensitive data, and to serve these models with assurance that their data and models remain protected, even from privileged attackers. In this session, our speakers will demonstrate applications and advances in Confidential ML: (i) how confidential computing hardware can accelerate multi-party and collaborative training, creating an incentive for data sharing; and (ii) how emerging cloud accelerator systems can be redesigned to deliver strong privacy guarantees, overcoming the limited performance of CPU-based confidential computing.

Antoine Delignat-Lavaud, Microsoft
Multi-party Machine Learning with Azure Confidential Computing

Raluca Ada Popa, University of California, Berkeley
Towards A Secure Collaborative Learning Platform

Emmett Witchel, University of Texas at Austin
Secure Computing with Cloud GPUs

10:30 AM–11:00 AM BREAK
11:00 AM–12:30 PM Security and Machine Learning

Session Lead: Emre Kiciman, Microsoft

Session Abstract: Machine learning has enabled many advances in processing visual, language, and other digital data signals and, as a result, is quickly becoming integrated in a variety of real-world systems with important societal and business purposes. However, as with any computer technology deployed at scale or in critical domains, ML systems face motivated adversaries who might wish to cause undesired behavior or violate security restrictions. In this session, participants will discuss the security challenges of today’s AI-driven systems and opportunities to mitigate adversarial attacks for more robust systems.

Aleksander Mądry, Massachusetts Institute of Technology
What Do Our Models Learn?

Dawn Song, University of California, Berkeley
AI & Security: Challenges, Lessons & Future Directions

Jerry Li, Microsoft
Algorithmic Aspects of Secure Machine Learning

Q&A panel with all 3 speakers

12:30 PM–1:00 PM BREAK
1:00 PM–2:00 PM Panel – Beyond Fairness: Pushing ML Frontiers for Social Equity

Moderator: Mary Gray, Microsoft

Session Abstract: At its core, machine learning is the artful science of statistically divining patterns from stores of data—typically, lots of data. These data are drawn from sources as diverse as tweets, Creative Commons images, and COVID-19 patient health records. Machine learning uses innovative techniques to draw what it can from the data on hand to push the boundaries of such problems as reliability and robustness in algorithmic modeling; theories and applications of causal inference; development of stable, predictive models from sparse data; uses of interpretable machine learning for course-correcting models that confound reason; and finding new ways to use noisy or sparsely annotated training data to drive insights. While societal impact and social equity are relevant to the frontiers above, this panel asks: How might ML take up data and questions across a variety of domains—such as education, development, discrimination, housing, health disparities, and inequality in labor markets—to advance our understanding of systemic inequities and challenges? These systems, arguably, tacitly shape the data, theory, and methods core to ML. How might centering questions of social equity advance the frontiers of the field?

Rediet Abebe, University of California, Berkeley

Irene Lo, Stanford University

Augustin Chaintreau, Columbia University

9:00 PM–10:30 PM (9:30 AM–11:00 AM IST) Big Ideas in Causality and Machine Learning

Session Lead: Amit Sharma, Microsoft

Session Abstract: Causal relationships are stable across distribution shifts. Models based on causal knowledge have the potential to generalize to unseen domains and to offer counterfactual predictions: how outcomes would change if a certain feature were changed in the real world. In recent years, machine learning methods based on causal reasoning have led to advances in out-of-domain generalization, fairness and explanation, and robustness to data selection biases. In this session, we discuss big ideas at the intersection of causal inference and machine learning toward building stable predictive models and discovering causal insights from data.

Special MSR India session

Susan Athey, Stanford University
Causal Inference, Consumer Choice, and the Value of Data

Elias Bareinboim, Columbia University
On the Causal Foundations of Artificial Intelligence (Explainability & Decision-Making)

Cheng Zhang, Microsoft
A Causal View on Robustness of Neural Networks

Q&A panel with all 3 speakers

Wednesday, July 22, 2020

Theme: Interpretability and Explanation

Time (PDT) Session Title Speaker / Talk Title
9:00 AM–10:30 AM Machine Learning Reliability and Robustness

Session Lead: Besmira Nushi, Microsoft

Session Abstract: As Machine Learning (ML) systems increasingly become part of user-facing applications, their reliability and robustness are key to building and maintaining trust with users and customers, especially in high-stakes domains. While advances in learning continuously improve model performance in expectation, there is an emergent need to identify, understand, and mitigate cases where models may fail in unexpected ways. This session will discuss ML reliability and robustness from both a theoretical and an empirical perspective. In particular, it will summarize important ongoing work that focuses on reliability guarantees and on how such guarantees translate (or not) to real-world applications. Further, the talks and the panel will discuss (1) properties of ML algorithms that make them preferable to others through a reliability and robustness lens, such as interpretability, consistency, and transportability; and (2) the tooling support ML developers need to check for and build reliable and robust ML. The discussion will be grounded in real-world applications of ML in vision and language tasks, healthcare, and decision making.

Thomas Dietterich, Oregon State University
Anomaly Detection in Machine Learning and Computer Vision

Ece Kamar, Microsoft
AI in the Open World: Discovering Blind Spots of AI

Suchi Saria, Johns Hopkins University
Implementing Safe & Reliable ML: 3 key areas of development

Q&A panel with all 3 speakers

10:30 AM–11:00 AM BREAK
11:00 AM–12:30 PM Saving Lives with Interpretable ML

Session Lead: Rich Caruana, Microsoft

Session Abstract: This session is about saving lives using interpretable machine learning in healthcare. It is critical to make sure healthcare models are safe to deploy. One challenge is that most patients are already receiving treatment, and that affects the data. A model might learn that high blood pressure is good for you, because the treatment given when you have high blood pressure lowers risk compared to healthier patients with lower blood pressure. There are many ways confounding can cause models to predict crazy things. In the first presentation, Rich Caruana will talk about problems we see in healthcare data thanks to interpretable machine learning. In the second presentation, Ankur Teredesai from the University of Washington will talk about fairness in machine learning for healthcare. And in the last presentation, Marzyeh Ghassemi from the University of Toronto will talk about how interpretable, explainable, and transparent AI can be dangerous in healthcare. It looks like an exciting lineup, so please join us!

Rich Caruana, Microsoft
Saving Lives with Interpretable Machine Learning

Ankur Teredesai, University of Washington
Fairness in Healthcare AI

Marzyeh Ghassemi, University of Toronto
Expl-AI-n Yourself: The False Hope of Explainable Machine Learning in Healthcare

Thursday, July 23, 2020

Theme: Machine Learning Systems

Time (PDT) Session Title Speaker / Talk Title
9:00 AM–10:30 AM Learning from Limited Labeled Data: Challenges and Opportunities for NLP

Session Lead: Ahmed Hassan Awadallah, Microsoft

Session Abstract: Modern machine learning applications have enjoyed a great boost from neural network models, allowing them to achieve state-of-the-art results on a wide range of tasks. Such models, however, require large amounts of annotated data for training. In many real-world scenarios such data is of limited availability, making it difficult to translate these gains into real-world impact. Collecting large amounts of annotated data is often difficult or even infeasible due to the time and expense of labeling and the private and personal nature of some datasets. This session will discuss several approaches to addressing labeled-data scarcity. In particular, it will cover work on: (1) transfer learning techniques that transfer knowledge between domains or languages to reduce the need for annotated data; (2) weakly supervised learning, where distant or heuristic supervision is derived from the data itself or from other available metadata; and (3) techniques that learn directly from user interactions or other reward signals, for example via reinforcement learning. The discussion will be grounded in real-world applications where we aspire to bring AI experiences quickly and efficiently to everyone, in more tasks, markets, languages, and domains.

Ahmed Hassan Awadallah, Microsoft
Bringing AI Experiences to Everyone

Marti Hearst, University of California, Berkeley
Summarization without the Summaries

Graham Neubig, Carnegie Mellon University
Lessons from the Long Tail: Methods for NLP in the Next 1,000 Languages

Alex Ratner, University of Washington
ML Development with Weak Supervision: Notes from the Field

Q&A panel with all 4 speakers

10:30 AM–11:00 AM BREAK
11:00 AM–12:40 PM Climate Impact of Machine Learning

Session Lead: Philip Rosenfield, Microsoft

Session Abstract: Microsoft has made an ambitious commitment to remove its carbon footprint in response to the overwhelming urgency of addressing climate change. Meanwhile, recent advances in machine learning (ML) models, such as transformer-based NLP, have produced substantial gains in accuracy at the cost of exceptionally large compute resources and, correspondingly, carbon emissions from energy consumption. Understanding and mitigating the climate impact of ML has emerged at the frontier of ML research, spanning multiple areas including hardware design, computational efficiency, and incentives for carbon efficiency.

The goal of this session is to identify priority areas to drive research agendas that are best-suited to efforts in academia, in industry, or in collaboration. We aim to inspire research advances and action, within both academia and industry, to improve the sustainability of machine learning hardware, software and frameworks.

Nicolo Fusi, Microsoft
Opening Remarks

Emma Strubell, Carnegie Mellon University
Learning to Live with BERT

Vivienne Sze, Massachusetts Institute of Technology
Reducing the Carbon Emissions of ML Computing – Challenges and Opportunities

Diana Marculescu, University of Texas at Austin
When Climate Meets Machine Learning: The Case for Hardware-ML Model Co-design

Q&A panel with all 4 speakers

12:40 PM–12:45 PM Closing Remarks Sandy Blyth, Managing Director
Microsoft Research Outreach

Vani Mandava, Director
Microsoft Research Outreach