Microsoft Research AI Breakthroughs 2019

About

Please join us at Microsoft Research’s invitation-only AI Breakthroughs workshop. This unique event is dedicated to discussing the most exciting research breakthroughs in AI with you, the best and brightest PhD students and postdoctoral researchers.

This two-day workshop in Redmond, Washington, provides a great opportunity to meet and discuss the future of AI with other top students and Microsoft researchers and to learn about the research we’re doing in AI.

Microsoft’s Event Code of Conduct

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. This includes events Microsoft hosts and participates in, where we seek to create a respectful, friendly, and inclusive experience for all participants. As such, we do not tolerate harassing or disrespectful behavior, messages, images, or interactions by any event participant, in any form, in any aspect of the program, including business and social activities, regardless of location.

We do not tolerate any behavior that is degrading to any gender, race, sexual orientation or disability, or any behavior that would violate Microsoft’s Anti-Harassment and Anti-Discrimination Policy, Equal Employment Opportunity Policy, or Standards of Business Conduct. In short, the entire experience at the venue must meet our culture standards.

We encourage everyone to assist in creating a welcoming and safe environment. Please report any concerns, harassing behavior, or suspicious or disruptive activity to venue staff, the event host or owner, or event staff.

Microsoft reserves the right to refuse admittance to or remove any person from company-sponsored events at any time in its sole discretion.

Agenda

Sunday, September 15

Time (PDT) Session
6:00 PM–8:00 PM Dinner Reception at the W Hotel Bellevue

Monday, September 16

Time (PDT) Session
8:30 AM–9:30 AM Breakfast + meet your nominator
9:30 AM–10:00 AM Opening – Eric Horvitz
10:00 AM–10:25 AM Deep Learning at MSR-AI – Jianfeng Gao
10:25 AM–10:50 AM Principled Approaches to Robust Machine Learning – Jerry Li
10:50 AM–11:15 AM Guidelines for Human-AI Interaction – Saleema Amershi
11:15 AM–11:40 AM Neuro-Symbolic Reasoning and Procedural AI – Alex Polozov
11:40 AM–12:05 PM Multi-Device Digital Assistance – Rob Sim, Adam Fourney, and Elnaz Nouri
12:05 PM–1:00 PM Lunch provided
1:00 PM–1:25 PM New Frontiers in Reinforcement Learning – Patrick MacAlpine
1:25 PM–1:50 PM Foundations for Situated Integrative AI – Sean Andrist
1:50 PM–2:15 PM Information and Data Sciences at MSR-AI – Paul Bennett
2:15 PM–2:40 PM Language and Information Technologies at MSR-AI – Ahmed Awadallah
2:40 PM–3:00 PM Break
3:00 PM–5:00 PM Poster Session
6:00 PM–9:00 PM Space Needle Dinner

Tuesday, September 17

Time (PDT) Session
9:00 AM–2:00 PM Group Sessions

*Agenda subject to change

Abstracts

Deep Learning at MSR-AI

Speaker: Jianfeng Gao

We advance state-of-the-art deep learning technologies for natural language processing, vision-language understanding, and dialogue. Our ongoing projects include universal language embedding, neuro-symbolic computing, vision-language navigation, image captioning and generation, ConvLab, Conversation Learner, and more.

Principled Approaches to Robust Machine Learning

Speaker: Jerry Li

The reliability of machine learning systems in the presence of adversarial noise has become a major field of study in recent years. As ML is used in increasingly security-sensitive applications and is trained on increasingly unreliable data, the ability of learning algorithms to tolerate worst-case noise has become more and more important. In this talk, I’ll survey a number of recent results in this area, both theoretical and applied, covering advances in robust statistics, federated learning, data poisoning, and adversarial examples for neural networks. The overarching goal is to give provably robust algorithms for these problems that still perform well in practice.
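As an illustrative sketch (not taken from the talk), the simplest setting of worst-case input noise is a linear classifier: there, the loss-maximizing perturbation under an L-infinity budget has a closed form, which is the same idea behind fast gradient-sign attacks on neural networks. The model, data point, and budget below are toy assumptions invented for illustration.

```python
import numpy as np

# Toy assumptions: a linear classifier f(x) = w.x + b with labels in {-1, +1}.
# For a linear model, the gradient of the margin loss w.r.t. the input is
# proportional to -y * w, so the worst-case perturbation within an
# L-infinity ball of radius eps has the closed form eps * (-y) * sign(w).

def fgsm_linear(x, w, y, eps):
    """Return a copy of x perturbed adversarially within an L-inf ball of radius eps."""
    return x + eps * (-y) * np.sign(w)

w = np.array([1.0, -2.0])
b = 0.0
x = np.array([0.3, -0.1])   # f(x) = 0.3 + 0.2 = 0.5, classified as +1
y = 1                       # true label

x_adv = fgsm_linear(x, w, y, eps=0.4)
print(np.sign(w @ x + b))      # 1.0  (correct before the attack)
print(np.sign(w @ x_adv + b))  # -1.0 (flipped by a perturbation of size 0.4)
```

A provably robust algorithm, in the sense the abstract describes, is one whose guarantees hold even against this kind of worst-case perturbation rather than only against random noise.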

Guidelines for Human-AI Interaction

Speaker: Saleema Amershi

Microsoft recently released 18 Guidelines for Human-AI Interaction. These guidelines represent best practices for responsible and human-centered AI design based on 20 years’ worth of research, synthesized and tested according to a rigorous validation process. This talk will introduce the guidelines along with concrete examples, and present implications and opportunities for future research.

Neuro-Symbolic Reasoning and Procedural AI

Speaker: Alex Polozov

Procedural intelligence is the technique of using programs to specify procedural knowledge for AI agents. It combines the complementary strengths of neural and symbolic AI technologies to solve reasoning tasks. Neural technologies (e.g., deep learning) excel at distilling patterns from large quantities of semi-supervised data and at building soft representations of perceptual signals such as images, speech, and natural language. In contrast, symbolic technologies (e.g., formal methods) excel at systematic logical reasoning about formally specified structures, constraint satisfaction, and task compositionality. In this talk, I will describe a few recent projects of the Procedural Intelligence group that apply combined neuro-symbolic reasoning to question answering, program synthesis, software engineering, and data science. We will see how it allows us to guarantee that a model’s predictions satisfy user-specified constraints, helps models switch between high and low levels of abstraction in reasoning, and empowers a new generation of AI-assisted software development.
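One way to picture the neural/symbolic combination the abstract mentions (a hypothetical sketch, not the group’s actual system): a neural model proposes scored candidates, and a symbolic checker vetoes any candidate that violates a user-specified constraint before the best survivor is returned. Every name, score, and candidate below is invented for illustration.

```python
# Hypothetical sketch: neural proposals filtered by a symbolic constraint.
# The "neural" log-probabilities and candidate programs are made up.

def constrained_best(candidates, score, satisfies):
    """Return the highest-scoring candidate that passes the symbolic check."""
    valid = [c for c in candidates if satisfies(c)]
    return max(valid, key=score) if valid else None

scores = {
    "x + y":   -0.1,   # well-typed
    "x + 'a'": -0.05,  # highest score, but violates the type constraint
    "y * 2":   -0.8,
}

def type_checks(expr):
    # Toy symbolic check: reject programs that mix ints and strings.
    try:
        eval(expr, {}, {"x": 1, "y": 2})
        return True
    except TypeError:
        return False

best = constrained_best(scores, scores.get, type_checks)
print(best)  # "x + y": the checker vetoes the higher-scoring candidate
```

This captures the guarantee described in the abstract in miniature: whatever the model’s scores, the returned prediction is certain to satisfy the constraint, because the symbolic check runs on every candidate.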

Multi-Device Digital Assistance

Speakers: Rob Sim, Adam Fourney, and Elnaz Nouri

The use of multiple digital devices to support people’s daily activities has long been discussed. The majority of U.S. residents own multiple electronic devices, such as smartphones, smart wearables, tablets, and desktop or laptop computers, making multi-device experiences (MDXs) that span several devices simultaneously viable for many individuals. Each device has unique strengths in aspects such as display, compute, portability, sensing, communications, and input. Despite the potential to utilize the portfolio of devices at their disposal, people typically use just one device per task, meaning they may need to compromise on the tasks they attempt or may underperform at the task at hand. It also means the support that digital assistants such as Amazon Alexa, Google Assistant, or Microsoft Cortana can offer is limited to what is possible on the current device. The rise of cloud services, coupled with increased ownership of multiple devices, creates opportunities for digital assistants to provide improved task-completion guidance. Our work explores the space of task support using multiple devices. In this talk, we will outline our platform that enables researchers to set up new MDX scenarios, and explore two use cases leveraging the platform to enable novel experiences incorporating devices with complementary capabilities.

New Frontiers in Reinforcement Learning

Speaker: Patrick MacAlpine

This talk provides a snapshot of a small sample of ongoing Reinforcement Learning (RL) projects across Microsoft Research AI. We’ll cover projects that grapple with foundational problems in RL, prototype RL in cyber-physical systems, and study RL in simulated environments.

Foundations for Situated Integrative AI

Speaker: Sean Andrist

In this talk, I will introduce a research effort at MSR we call “Situated Interaction,” in which we strive to design and develop intelligent technologies that can reason deeply about their surroundings and engage in fluid interaction with people in physically and socially situated settings. Our research group has developed a number of situated interactive systems for long-term in-the-wild deployment, including smart elevators, virtual agent receptionists, and directions-giving robots, and we have encountered a host of fascinating and unforeseen challenges along the way. I will discuss research challenges of understanding engagement, turn-taking, proxemics, and F-formations, along with systems-level challenges inherent to building, deploying, and maintaining physically situated interactive technologies in the wild.

Information and Data Sciences at MSR-AI

Speaker: Paul Bennett

The Information and Data Science group is at the forefront of research in understanding and modeling people’s behavior as they interact with software and services. Our key missions are to advance the state of the art in how we (1) connect people to information to achieve their goals and (2) understand the implications of information use on society. We focus on application areas related to personalization, attention and focus, and social good, and we advance horizontal technology related to (i) large-scale and deep learning, (ii) counterfactual and causal reasoning, (iii) interaction techniques, (iv) productivity measurement, and (v) data bias. We consider the holistic process of building intelligent applications, with an emphasis on understanding traces of human behavior from interactions with digital systems, and we collaborate closely with partners in research, in product groups, and externally to move from insights to impact. In this talk, we will briefly overview progress in four key areas: the Future of Mobile Productivity, Robust Causal Reasoning, Next-Generation Search, and the Personal Web.

Language and Information Technologies at MSR-AI

Speaker: Ahmed Awadallah

Modern NLP applications have enjoyed a great boost from neural network models. Such models, however, require large amounts of annotated data for training. In many real-world scenarios, such data is of limited availability due to the private nature of the data or the high cost of manual annotation. In this talk, I will cover some of the work happening in the Language and Information Technologies (LIT) team on using transfer learning, weak supervision, and user behavior modeling to build scalable and efficient models while reducing the reliance on manually annotated training datasets. The LIT team in MSR AI works on harnessing AI to understand how people interact with information and on developing new methods for effective and scalable language understanding and information management.