Faculty Summit 2017: The Edge of AI

About

The 18th annual Microsoft Research Faculty Summit was held in Redmond, WA, on July 17–18, 2017, and featured keynotes, talks, panels, and technology showcases focused on Artificial Intelligence (AI) research: The Edge of AI.

Microsoft AI researchers are striving to create intelligent machines that complement human reasoning and enrich human experiences and capabilities. At the core is the ability to harness the explosion of digital data and computational power with advanced algorithms that extend machines’ capacity to learn, reason, sense, and understand—enabling collaborative and natural interactions between machines and humans.

We are seeing widespread investments in AI that are advancing the state of the art in machine intelligence and perception, enabling computers to interpret what they see, communicate in natural language, answer complex questions, and interact with their environment. In addition to these technological advances, researchers and thought leaders must attend to the ethics and societal impact of intelligent technologies.

The Microsoft Research Faculty Summit 2017 brought together thought leaders and researchers from a broad range of disciplines, including computer science, the social sciences, human design and interaction, and policy. Together we highlighted some of the key challenges posed by artificial intelligence and identified the next generation of approaches, techniques, and tools needed to develop AI that can solve the world’s most pressing challenges.

Focus Areas

We explored the following areas:

  • Machine learning – Developing and improving algorithms that help computers learn from data to create more advanced, intelligent computer systems.
  • Human language technologies – Linking language to the world through speech recognition, language modeling, language understanding, and dialog systems.
  • Perception and sensing – Creating computers and devices which understand what they see to enable tasks ranging from autonomous driving to analysis of medical images.
  • AI, people, and society – Examining the societal and individual impacts of the spread of intelligent technologies to formulate best practices for their design.
  • Systems, tools and platforms – Integrating intelligent technologies to create interactive tools such as chatbots that incorporate contextual data to augment and enrich human reasoning.
  • Integrative intelligence – Weaving together advances in AI from disciplines such as computer vision and human language technologies to create end-to-end systems that learn from data and experience.
  • Cyber-physical systems and robotics – Developing methods to ensure the integrity of drones, robots and other intelligent technologies that interact with the physical world.
  • Human AI collaboration – Harnessing research breakthroughs in artificial intelligence to design technologies that allow humans to interact with computers in novel, meaningful and productive ways.
  • Decisions and planning – Reasoning about future events to enable informed collaborations between humans and intelligent agents.

Agenda

Sunday, July 16

Time (PDT) Session Location
4:00 PM–7:00 PM
Welcome Reception and Registration Desk Open
Hilton Bellevue Hotel

Monday, July 17

Time (PDT) Session Speaker Location
7:30 AM–8:30 AM
Breakfast
McKinley
8:30 AM–8:40 AM
Welcome
Christopher Bishop, Microsoft
Evelyne Viegas, Microsoft
Kodiak
8:40 AM–9:10 AM
AI in the Open World
[Full Video]
Chair: Christopher Bishop, Microsoft
Speaker:
  • Eric Horvitz, Microsoft | slides
Kodiak
9:10 AM–10:10 AM
Smart Enough to Work With Us? Foundations and Challenges for Teamwork-Enabled AI Systems
[Full Video]
Chair: Susan Dumais, Microsoft
Speaker:
  • Barbara J. Grosz, Harvard University | slides
Kodiak
10:10 AM–10:40 AM
Technology Showcase – Lightning Round
Chair: Roy Zimmermann, Microsoft
Kodiak
10:40 AM–11:00 AM
Break
11:00 AM–12:30 PM
Machine Reading Using Neural Machines
[Video Abstract | Full Video]
Chair: Lucy Vanderwende, Microsoft
Speakers:
  • Isabelle Augenstein, University College London | slides
  • Jianfeng Gao, Microsoft | slides
  • Percy Liang, Stanford University | slides
  • Rangan Majumder, Microsoft | slides
Cascade
Integrative-AI
[Video Abstract | Full Video]
Chair: Sean Andrist, Microsoft
Speakers:
  • Dan Bohus, Microsoft | slides
  • Louis-Philippe Morency, Carnegie Mellon University | slides
  • Besmira Nushi, Microsoft | slides
Rainier
AI for Accessibility: Augmenting Sensory Capabilities with Intelligent Technology
[Video Abstract | Full Video]
Chair: Meredith Ringel Morris, Microsoft | slides
Speakers:
  • Jeffrey Bigham, Carnegie Mellon University | slides
  • Shaun Kane, University of Colorado Boulder | slides
  • Walter Lasecki, University of Michigan | slides
St. Helens
12:30 PM–1:15 PM
Lunch
McKinley
1:15 PM–3:15 PM
Technology Showcase
Chair: Roy Zimmermann, Microsoft
3:15 PM–4:45 PM
Conversational Systems in the Era of Deep Learning and Big Data
[Video Abstract | Full Video]
Chair: Bill Dolan, Microsoft
Speakers:
  • Jackie Chi Kit Cheung, McGill University | slides
  • Michel Galley, Microsoft | slides
  • Ian Lane, Carnegie Mellon University | slides
  • Alan Ritter, Ohio State University | slides
  • Lucy Vanderwende, Microsoft | slides
  • Jason Williams, Microsoft | slides
Cascade
From Visual Sensing to Visual Intelligence
[Video Abstract | Full Video]
Chair: Gang Hua, Microsoft
Speakers:
  • Rama Chellappa, University of Maryland | slides
  • Katsu Ikeuchi, Microsoft | slides
  • Song-Chun Zhu, University of California, Los Angeles
Rainier
Learnings from Human Perception
[Video Abstract | Full Video]
Chair: Mar Gonzalez-Franco, Microsoft | slides
Speakers:
  • Olaf Blanke, École Polytechnique Fédérale de Lausanne | slides
  • Mel Slater, University of Barcelona | slides
  • Ana Tajadura-Jiménez, Universidad Loyola Andalucía & University College London | slides
St. Helens
5:00 PM–5:45 PM
Fireside Chat
[Full Video]
Chair: Sandy Blyth, Microsoft
Speakers:
  • Christopher Bishop, Microsoft
  • Harry Shum, Microsoft
Kodiak
5:45 PM–6:30 PM
Travel to Dinner
 
6:30 PM–9:00 PM
Dinner at The Golf Club at Newcastle
 

Tuesday, July 18

Time (PDT) Session Speaker Location
7:30 AM–8:30 AM
Breakfast
McKinley
8:30 AM–9:30 AM
The Interplay of Agent and Market Design
[Full Video]
Chair: Kori Inkpen, Microsoft
Speaker:
  • Amy Greenwald, Brown University | slides
Kodiak
9:45 AM–11:15 AM
Provable Algorithms for ML/AI Problems
[Video Abstract | Full Video]
Chair: Prateek Jain, Microsoft
Speakers:
  • Sham Kakade, University of Washington | slides
  • Ravi Kannan, Microsoft | slides
  • Santosh Vempala, Georgia Institute of Technology | slides
Cascade
Private AI
[Video Abstract | Full Video]
Chair: Ran Gilad-Bachrach, Microsoft
Speakers:
  • Rich Caruana, Microsoft | slides
  • Jung Hee Cheon, Seoul National University | slides
  • Kristin Lauter, Microsoft | slides
Rainier
AI for Earth
[Video Abstract | Full Video]
Chair: Lucas Joppa, Microsoft
Speakers:
  • Tanya Berger-Wolf, University of Illinois at Chicago | slides
  • Carla Gomes, Cornell University
  • Milind Tambe, University of Southern California | slides
St. Helens
11:30 AM–1:00 PM
Microsoft Cognitive Toolkit (CNTK) for Deep Learning
[Video Abstract | Full Video]
Chair: Chris Basoglu, Microsoft | slides
Speakers:
  • Sayan Pathak, Microsoft | slides
  • Yanmin Qian, Shanghai Jiaotong University | slides
  • Cha Zhang, Microsoft | slides
Cascade
AI and Security
[Video Abstract | Full Video]
Chair: David Molnar, Microsoft | slides
Speakers:
  • Taesoo Kim, Georgia Institute of Technology | slides
  • Dawn Song, University of California, Berkeley | slides
  • Michael Walker, Microsoft | slides
Rainier
Social and Emotional Intelligence in AI and Agents
[Full Video]
Moderator: Mary Czerwinski, Microsoft | slides
Panelists:
  • Justine Cassell, Carnegie Mellon University | slides
  • Jonathan Gratch, University of Southern California | slides
  • Daniel McDuff, Microsoft | slides
  • Louis-Philippe Morency, Carnegie Mellon University | slides
St. Helens
1:00 PM–2:00 PM
Lunch
2:00 PM–3:30 PM
Transforming Machine Learning and Optimization through Quantum Computing
[Video Abstract | Full Video]
Chair: Krysta Svore, Microsoft
Speakers:
  • Helmut Katzgraber, Texas A&M | slides
  • Matthias Troyer, Microsoft | slides
  • Nathan Wiebe, Microsoft | slides
Cascade
Challenges and Opportunities in Human-Machine Partnership
[Video Abstract | Full Video]
Chair: Ece Kamar, Microsoft
Speakers:
  • Eric Horvitz, Microsoft
  • Subbarao Kambhampati, Arizona State University | slides
  • Milind Tambe, University of Southern California | slides
Rainier
Towards Socio-Culturally Aware AI
[Video Abstract]
Chair: Kalika Bali, Microsoft | slides
Speakers:
  • Cristian Danescu-Niculescu-Mizil, Cornell University | slides
  • Daniel McDuff, Microsoft | slides
  • Christopher Potts, Stanford University | slides
St. Helens
3:45 PM–4:45 PM
AI, People, and Society
[Full Video]
Moderator: Eric Horvitz, Microsoft
Panelists:
  • Solon Barocas, Microsoft
  • Carla Gomes, Cornell University
  • Percy Liang, Stanford University
  • Gireeja Ranade, Microsoft
Kodiak
4:45 PM–5:45 PM
Model-Based Machine Learning
[Full Video]
Chair: Eric Horvitz, Microsoft
Speaker:
  • Christopher Bishop, Microsoft | slides
Kodiak
5:45 PM–6:00 PM
Closing Remarks
Speakers:
  • Christopher Bishop, Microsoft
  • Evelyne Viegas, Microsoft
  • Roy Zimmermann, Microsoft
Kodiak

Abstracts

Monday, July 17

AI in the Open World

Speaker: Eric Horvitz, Microsoft

[Full Video]

Fielding AI solutions in the open world requires systems to grapple with incompleteness and uncertainty. This session addresses several promising areas of research in open world AI, including enhancing robustness via leveraging algorithmic portfolios, learning from experiences in rich simulation environments, harnessing approaches to transfer learning, and learning and personalization from small training sets. In addition, this session covers mechanisms for engaging people to identify and address uncertainties, failures, and blind spots in AI systems.

Smart Enough to Work With Us? Foundations and Challenges for Teamwork-Enabled AI Systems

Speaker: Barbara J. Grosz, Harvard University

[Full Video]

For much of its history, AI research has aimed toward building intelligent machines independently of their interactions with people. As the world of computing has evolved, and systems pervade ever more facets of life, the challenges of building computer systems smart enough to work effectively with people, in groups as well as individually, have become increasingly important. Furthermore, recent advances in AI-capable systems raise societal and ethical questions about the effects of such systems on people and societies at large. In this talk, Barbara argues that the ability to work with people is essential for truly intelligent behavior, identifies fundamental scientific questions this teamwork requirement raises, describes research by her group on computational models of collaboration and their use in supporting health-care coordination, and briefly discusses ethical challenges AI-capable systems pose, along with approaches to those challenges.

Machine Reading Using Neural Machines

Speakers: Isabelle Augenstein, University College London; Jianfeng Gao, Microsoft; Percy Liang, Stanford University; Rangan Majumder, Microsoft

[Video Abstract | Full Video]

Teaching machines to read, process and comprehend natural language documents and images is a coveted goal in modern AI. We see growing interest in machine reading comprehension (MRC) due to potential industrial applications as well as technological advances, especially in deep learning and the availability of various MRC datasets that can benchmark different MRC systems. Despite the progress, many fundamental questions remain unanswered: Is question answering (QA) the proper task to test whether a machine can read? What is the right QA dataset to evaluate the reading capability of a machine? For speech recognition, the Switchboard dataset was a research goal for 20 years – why is there such a proliferation of datasets for machine reading? How important is model interpretability and how can it be measured? This session brings together experts at the intersection of deep learning and natural language processing to explore these topics.

Integrative-AI

Speakers: Dan Bohus, Microsoft; Louis-Philippe Morency, Carnegie Mellon University; Besmira Nushi, Microsoft

[Video Abstract | Full Video]

Over the last decade, algorithmic developments coupled with increased computation and data resources have led to advances in well-defined verticals of AI such as vision, speech recognition, natural language processing, and dialog technologies. However, the science of engineering larger, integrated systems that are efficient, robust, transparent, and maintainable is still very much in its infancy. Efforts to develop end-to-end intelligent systems that encapsulate multiple competencies and act in the open world have brought into focus new research challenges. Making progress towards this goal requires bringing together expertise from AI and systems, and this progress can be sped up with shared best practices, tools and platforms. This session highlights opportunities and challenges for research and development for integrative AI systems. The speakers address various aspects of integrative AI systems, from multimodal learning and troubleshooting to development through shared platforms.

AI for Accessibility: Augmenting Sensory Capabilities with Intelligent Technology

Speakers: Jeffrey Bigham, Carnegie Mellon University; Shaun Kane, University of Colorado Boulder; Walter Lasecki, University of Michigan

[Video Abstract | Full Video]

Advances in AI technologies have important ramifications for the development of accessible technologies. These technologies can augment the capabilities of people with sensory disabilities, enabling new and empowering experiences. In this session, we present examples of how breakthroughs in AI can support key tasks for diverse user populations. Examples of such applications include image labeling on behalf of people with visual impairments, fast audio captioning for people who are hard-of-hearing, and better word prediction for people who rely on communication augmentation tools to speak.

Conversational Systems in the Era of Deep Learning and Big Data

Speakers: Jackie Chi Kit Cheung, McGill University; Michel Galley, Microsoft; Ian Lane, Carnegie Mellon University; Alan Ritter, Ohio State University; Lucy Vanderwende, Microsoft; Jason Williams, Microsoft

[Video Abstract | Full Video]

Recent research in recurrent neural models, combined with the availability of massive amounts of dialog data, has spurred the development of a new generation of conversational systems. Where past approaches focused on task-oriented dialog and relied on a pipeline of modules (e.g., language understanding, state tracking, etc.), new techniques learn end-to-end models trained exclusively on massive text transcripts of conversations. While promising, these new methods raise important questions: how can neural models go beyond chat-style dialog and interface with structured domain knowledge and programmatic APIs? How can these techniques be applied in domains where there is no existing dialog data? What new system behaviors are possible with these techniques and resources? This session brings together experts at the intersection of deep learning and conversational systems to explore these topics through their on-going work and expectations for the future.

From Visual Sensing to Visual Intelligence

Speakers: Rama Chellappa, University of Maryland; Katsu Ikeuchi, Microsoft; Song-Chun Zhu, University of California, Los Angeles

[Video Abstract | Full Video]

Computer vision is arguably one of the most challenging subfields of AI. To better address the key challenges, the vision research community has long been branched off from the general AI community and focused on its core problems. In recent years, we have witnessed tremendous progress in visual sensing due to big data and more powerful learning machines. However, we still lack a holistic view of how visual sensing relates to more general intelligence. This session brings researchers together to discuss research trends in computer vision, the role of visual sensing in more integrated general intelligence systems, and how visual sensing systems will interact with other sensing modalities from a computational sense.

Learnings from Human Perception

Speakers: Olaf Blanke, École Polytechnique Fédérale de Lausanne; Mel Slater, University of Barcelona; Ana Tajadura-Jiménez, Universidad Loyola Andalucía & University College London

[Video Abstract | Full Video]

Scientists have long explored the different sensory inputs to better understand how humans perceive the world and control their bodies. Many of the great discoveries about the human perceptual system were first found through laboratory experiments that stimulated inbound sensory inputs as well as outbound sensory predictions. These aspects of cognitive neuroscience have important implications when building technologies, as we learn to transfer abilities that are natural to humans to leverage the strengths of machines. Machines can also be used to learn further about human perception, because technology allows scientists to reproduce impossible events and observe how humans would respond and adapt to those events. This loop from human to machine and back again can help transfer what we learn from our evolutionary intelligence to future machines and AI. This session addresses progress and challenges in applying human perception to machines, and vice versa.

Tuesday, July 18

The Interplay of Agent and Market Design

Speaker: Amy Greenwald, Brown University

[Full Video]

Humans make hundreds of routine decisions daily. More often than not, the impact of our decisions depends on the decisions of others. As AI progresses, we are offloading more and more of these decisions to artificial agents. This research is aimed at building AI agents that make effective decisions in multiagent—part human, part artificial—environments. Current efforts are relevant to economic domains, mostly in the service of perfecting market designs. This talk covers AI agent design in applications ranging from renewable energy markets and online ad exchanges to wireless spectrum auctions.

Provable Algorithms for ML/AI Problems

Speakers: Sham Kakade, University of Washington; Ravi Kannan, Microsoft; Santosh Vempala, Georgia Institute of Technology

[Video Abstract | Full Video]

Machine learning (ML) has demonstrated success in various domains such as web search, ads, computer vision, natural language processing (NLP), and more. These success stories have led to a big focus on democratizing ML and building robust systems that can be applied to a variety of domains, problems, and data sizes. However, because typical ML algorithms are often poorly understood, even experts must resort to hit-and-miss efforts to get a system working, which limits the types and applications of ML systems. Hence, designing provable and rigorous algorithms is critical to the success of such large-scale, general-purpose ML systems. The goal of this session was to bring together researchers from various communities (ML, algorithms, optimization, statistics, and more) along with researchers from more applied ML communities such as computer vision and NLP, with the intent of understanding challenges involved in designing end-to-end robust, rigorous, and predictable ML systems.

Private AI

Speakers: Rich Caruana, Microsoft; Jung Hee Cheon, Seoul National University; Kristin Lauter, Microsoft

[Video Abstract | Full Video]

As the volume of data goes up, the quality of machine learning models, predictions, and services will improve. Once models are trained, predictive cloud services can be built on them, but users who want to take advantage of the services have serious privacy concerns about exposing consumer and enterprise data—such as private health or financial data—to machine learning services running in the cloud. Recent developments in cryptography provide tools to build and enable “Private AI,” including private predictive services that do not expose user data to the model owner, and that also provide the means to train powerful models across several private datasets that can be shared only in encrypted form. This session examines the state of the art for these tools, and discusses important directions for the future of Private AI.

AI for Earth

Speakers: Tanya Berger-Wolf, University of Illinois at Chicago; Carla Gomes, Cornell University; Milind Tambe, University of Southern California

[Video Abstract | Full Video]

Human society is faced with an unprecedented challenge to mitigate and adapt to changing climates, ensure resilient water supplies, sustainably feed a population of 10 billion, and stem a catastrophic loss of biodiversity. Time is too short, and resources too thin, to achieve these outcomes without the exponential power and assistance of AI. Early efforts are encouraging, but current solutions are typically one-off attempts that require significant engineering beyond what’s available from the AI research community. In this session we explore, in collaboration with the Computational Sustainability Network (a twice-funded National Science Foundation (NSF) Expedition), the latest applications of AI research to sustainability challenges, as well as ways to streamline environmental applications of AI so they can work with traditional academic programs. The speakers in this session set the scene for the state of the art in AI for Earth research and frame the agenda for the next generation of AI applications.

Microsoft Cognitive Toolkit (CNTK) for Deep Learning

Speakers: Sayan Pathak, Microsoft; Yanmin Qian, Shanghai Jiaotong University; Cha Zhang, Microsoft

[Video Abstract | Full Video]

Microsoft Cognitive Toolkit (CNTK) is a production-grade, open-source, deep-learning library. In the spirit of democratizing AI tools, CNTK embraces fully open development, is available on GitHub, and provides support for both Windows and Linux. The recent 2.0 release (currently in release candidate) packs in several enhancements—most notably Python/C++ API support, easy-to-onboard tutorials (as Python notebooks) and examples, and an easy-to-use Layers interface. These enhancements, combined with unparalleled scalability on NVIDIA hardware, were demonstrated by both NVIDIA at SuperComputing 2016 and Cray at NIPS 2016. They also helped Microsoft achieve its recent breakthrough in speech recognition, reaching human parity in conversational speech. The toolkit is used in all kinds of deep learning, including image, video, speech, and text data. The speakers discuss the current features of the toolkit’s release and its application to deep learning projects.

AI and Security

Speakers: Taesoo Kim, Georgia Institute of Technology; Dawn Song, University of California, Berkeley; Michael Walker, Defense Advanced Research Projects Agency

[Video Abstract | Full Video]

In the future, every company will be using AI, which means that every company will need a secure infrastructure that addresses AI security concerns. At the same time, the domain of computer security has been revolutionized by AI techniques, including machine learning, planning, and automatic reasoning. What are the opportunities for researchers in both fields—security infrastructure and AI—to learn from each other and continue this fruitful collaboration? This session covers two main topics. In the first half, we discuss how AI techniques have changed security, using a case study of the DARPA Cyber Grand Challenge, where teams built systems that can reason about security in real time. In the second half, we talk about security issues inherent in AI. How can we ensure the integrity of decisions from the AI that drives a business? How can we defend against adversarial control of training data? Together, we identify common problems for future research.

Social and Emotional Intelligence in AI and Agents

Panelists: Justine Cassell, Carnegie Mellon University; Jonathan Gratch, University of Southern California; Daniel McDuff, Microsoft; Louis-Philippe Morency, Carnegie Mellon University

[Full Video]

Social signals and emotions are fundamental to human interactions and influence memory, decision-making and wellbeing. As AI systems, in particular intelligent agents, become more advanced, there is increasing interest in applications that can fulfill task goals, meet social goals, and respond to emotional states. Research has shown that cognitive agents with these capabilities can increase empathy, rapport and trust with their users, amongst other things. However, designing such agents is extremely complex, as most human knowledge of emotion is implicit/tacit and defined by unwritten rules. Furthermore, these rules are culturally dependent and not universal. This session focuses on research into intelligent cognitive agents. It covers the measurement and understanding of verbal and non-verbal cues, the computational modeling of emotion and the design of sentient virtual agents.

Transforming Machine Learning and Optimization through Quantum Computing

Speakers: Helmut Katzgraber, Texas A&M; Matthias Troyer, Microsoft; Nathan Wiebe, Microsoft

[Video Abstract | Full Video]

In 1982, Richard Feynman first proposed using a “quantum computer” to simulate physical systems with exponential speed over conventional computers. Quantum algorithms can solve problems in number theory, chemistry, and materials science that would otherwise take longer than the lifetime of the universe to solve on an exascale machine. Quantum computers offer new methods for machine learning, including training Boltzmann machines and perceptron models. These methods have the potential to dramatically improve upon today’s machine learning algorithms used in almost every device, from cell phones to cars. But can quantum models make it possible to probe altogether different types of questions and solutions? If so, how can we take advantage of new representations in machine learning? How will we handle large amounts of data and input/output on a quantum computer? This session focuses on both known improvements and open challenges in using quantum techniques for machine learning and optimization.

Challenges and Opportunities in Human-Machine Partnership

Speakers: Eric Horvitz, Microsoft; Subbarao Kambhampati, Arizona State University; Milind Tambe, University of Southern California

[Video Abstract | Full Video]

The new wave of excitement about AI in recent years has been based on successes in perception tasks or on domains with limited and known dynamics. Machines have achieved human parity in accuracy for image recognition and speech recognition, and have beaten human champions at games such as Go and Poker, creating an impression of a future in which AI systems function alone. However, for more complex and open-ended tasks, current AI technologies have limitations. Future deployments of AI systems in daily life are likely to emerge from the complementary abilities of humans and machines and require close partnerships between them. The goal of this session was to highlight the potential of human-machine partnership through real-world applications. In addition, the speakers identified challenges for research and development that, when solved, will build towards successful AI systems that can partner with people.

Towards Socio-Culturally Aware AI

Speakers: Cristian Danescu-Niculescu-Mizil, Cornell University; Daniel McDuff, Microsoft; Christopher Potts, Stanford University

[Video Abstract]

How do we make AI agents appear to be more “human”? The goal of this session was to bring together researchers in human-computer interaction, linguistics, machine learning, speech, and natural language processing to discuss what is required of AI that goes beyond functional intelligence, and that helps agents display social and cultural intelligence. We present an overview of the research that we are doing at Microsoft Research India toward the goal of building socially and culturally aware AI, such as chatbots for young, urban India, and socio-linguistic norms in multilingual communities. This was followed by a panel discussion entitled “AI for socio-culturally enriching interactions: What is it and when is it a success?” The panel discussed what constitutes socio-culturally aware AI, what the metrics of success are, and what the desired outcomes are.

AI, People, and Society

Speakers: Solon Barocas, Microsoft; Carla Gomes, Cornell University; Percy Liang, Stanford University; Gireeja Ranade, Microsoft

[Full Video]

Advances in AI promise great benefit to people and organizations. However, as we push the science of AI forward, we need to consider potential downsides, unintended consequences and costly outcomes. Challenges include ethical and legal issues with the use of autonomous systems, end-user distrust in reasoning, errors and biases in reasoning, the rise of inadvertent side effects, and criminal uses of AI. We discuss rising concerns with the influences of AI on people and society, and promising directions for addressing them.

Model-Based Machine Learning

Speaker: Christopher Bishop, Microsoft

[Full Video]

Today, thousands of scientists and engineers are applying machine learning to an extraordinarily broad range of domains, and over the last five decades, researchers have created literally thousands of machine learning algorithms. Traditionally, an engineer wanting to solve a problem using machine learning must choose one or more of these algorithms to try, and the choice is often constrained by familiarity with an algorithm, or by the availability of software implementations. In this talk we describe ‘model-based machine learning’, a new approach in which a custom solution is formulated for each new application. We show how probabilistic graphical models, coupled with efficient inference algorithms, provide a flexible foundation for model-based machine learning, and we describe several large-scale commercial applications of this framework. We also introduce the concept of ‘probabilistic programming’ as a powerful approach to model-based machine learning, and we discuss a specific probabilistic programming language called Infer.NET, which has been widely used in practical applications.

Speakers

Portrait of Sean Andrist

Sean Andrist
Microsoft

Bio

Sean Andrist is a researcher at Microsoft Research in the Adaptive Systems and Interaction group. His research focuses on “situated embodied interaction,” particularly human-robot interaction in open world settings. He is investigating how a heterogeneous set of multimodal sensors can be leveraged to improve a robot’s awareness of the social context around it, and how that enhanced representation can be coupled with actions and behaviors that improve the robot’s task and social capabilities, as well as increasing user acceptance and rapport. He received his PhD from the University of Wisconsin-Madison in 2016, where he researched social gaze mechanisms for human-robot and human-agent interaction.

Portrait of Isabelle Augenstein

Isabelle Augenstein
University College London

Bio

Isabelle Augenstein is a postdoctoral research associate in the department of computer science at University College London (UCL). Her main research interests are statistical natural language processing and weakly supervised learning, with applications including automated fact checking and machine reading of scientific publications. Prior to joining UCL, she was a research associate and PhD student at the University of Sheffield, a research assistant at Karlsruhe Institute of Technology and a computational linguistics student at Heidelberg University. She is currently organizing a shared task on information extraction from scientific publications at SemEval 2017, the first WiNLP workshop at ACL 2017, and a workshop on deep structured prediction at ICML.

Kalika Bali
Microsoft

Bio

Kalika Bali is a researcher at Microsoft Research India working in the areas of machine learning, natural language systems and applications, as well as technology for emerging markets. Her research interests lie broadly in the area of speech and language technology especially in the use of linguistic models for building technology that offers natural human-computer interactions.

She is currently working on Project Mélange, which tries to understand, process and generate code-mixed language (more than one language in a single conversation) data for both text and speech. Recently, Bali has become interested in how social and pragmatic functions affect language use, and how to build effective computational models of sociolinguistics and pragmatics that can lead to more aware AI. Bali is also interested in natural language processing and speech technology for Indian languages, serving on several committees that work on Indian language technologies.

Solon Barocas
Cornell University

Bio

Solon Barocas is an assistant professor in the department of information science at Cornell University. His research explores ethical and policy issues in AI, particularly fairness in machine learning, methods for bringing accountability to automated decision-making, and the privacy implications of inference. In 2014, Barocas co-founded Fairness, Accountability, and Transparency in Machine Learning (FAT/ML), an annual event that brings together an emerging community of researchers working on these issues. He was previously a postdoctoral researcher in the New York City lab of Microsoft Research, where he was a member of the fairness, accountability, transparency, and ethics in AI group, as well as a postdoctoral research associate at the Center for Information Technology Policy at Princeton University. He completed his doctorate in the Department of Media, Culture, and Communication at New York University, where he remains an affiliate of the Information Law Institute.

Chris Basoglu
Microsoft

Bio

Chris Basoglu, PhD, is a partner engineering manager in the Speech Recognition Services team at Microsoft. He is responsible for the Microsoft speech recognition runtime software as well as Microsoft-wide speech service infrastructure.

Tanya Berger-Wolf
University of Illinois at Chicago

Bio

Tanya Berger-Wolf is a professor of computer science at the University of Illinois at Chicago, where she heads the Computational Population Biology Lab. As a computational ecologist, her research is at the unique intersection of computer science, wildlife biology, and social sciences. She creates computational solutions to address questions such as how environmental factors affect the behaviors of social animals (humans included). Berger-Wolf is also a cofounder of the conservation software nonprofit Wildbook, which recently enabled the first-of-its-kind complete species census of the endangered Grevy’s zebra, using photographs taken by ordinary citizens in Kenya.

Berger-Wolf holds a PhD in computer science from the University of Illinois at Urbana-Champaign. She has received numerous awards for her research and mentoring, including the US National Science Foundation CAREER Award, Association for Women in Science Chicago Innovator Award, and the UIC Mentor of the Year Award.

Jeffrey P. Bigham
Carnegie Mellon University

Bio

Jeffrey P. Bigham is an associate professor in the Human-Computer Interaction and Language Technologies Institutes in the School of Computer Science at Carnegie Mellon University. He uses clever combinations of crowds and computation to build truly intelligent systems that automate themselves over time. Bigham received his BSE degree in computer science from Princeton University in 2003, and received his PhD in computer science and engineering from the University of Washington in 2009. He has been a visiting researcher at MIT CSAIL, Microsoft Research, and Google X. He has received a number of awards for his work, including the MIT Technology Review Top 35 Innovators Under 35 Award, the Alfred P. Sloan Fellowship, and the National Science Foundation CAREER Award. He is a member of the inaugural class of the ACM Future of Computing Academy.

Christopher Bishop
Microsoft

Bio

Christopher Bishop is a Microsoft technical fellow and the laboratory director at Microsoft Research Cambridge. He is also professor of computer science at the University of Edinburgh, and a fellow of Darwin College, Cambridge. He has been elected fellow of both the Royal Academy of Engineering and the Royal Society of Edinburgh.

Bishop obtained a BA in physics from Oxford, and a PhD in theoretical physics from the University of Edinburgh, with a thesis on quantum field theory. He worked at Culham Laboratory on the theory of magnetically confined plasmas, and was head of the Applied Neurocomputing Centre at AEA Technology.

He was also a chair in the department of computer science and applied mathematics at Aston University, where he led the neural computing research group. After running the international research program on neural networks and machine learning at the Isaac Newton Institute for Mathematical Sciences in Cambridge, he joined the Microsoft Research Laboratory in Cambridge.

Olaf Blanke
École Polytechnique Fédérale de Lausanne

Bio

Olaf Blanke is founding director of the Center for Neuroprosthetics and holds the Bertarelli Foundation Chair in Cognitive Neuroprosthetics at the Swiss Federal Institute of Technology (EPFL). He also directs the Laboratory of Cognitive Neuroscience at EPFL and is professor of neurology at the University Hospital of Geneva. Blanke’s neuroscience research is dedicated to the study of consciousness and how bodily processing of the brain encodes the self, including such fascinating alterations of the self as out-of-body experiences and ghost sensations. He has authored over 200 scientific articles, including those published in Nature, Science, Lancet and Brain, and has delivered over 200 presentations. His work includes pioneering technology research in virtual reality, augmented virtuality, brain-machine interfaces, and robotics dedicated to the control and enabling of complex subjective mental states (i.e. experience engineering). Blanke is member of the board and chief scientific advisor at Mindmaze, a virtual reality and rehabilitation company. In his medical research in neurorehabilitation and neuroprosthetics, Blanke develops devices and procedures for diagnostics and therapeutics for several neurological conditions.

Sandy Blyth
Microsoft

Bio

Sandy Blyth is managing director of MSR Outreach. He joined Microsoft Research in March of 2017 from the Microsoft finance team, where he ran the integration management practice of Microsoft’s Venture Integration (VI) group. His team most recently supported such acquisitions as Maluuba, SwiftKey, and LinkedIn.

Prior to joining Microsoft, Blyth was managing director of Parhelion Partners, LLC, a regional investment bank serving entrepreneurial growth companies. His technology experiences include work in hardware and software product development; management, consulting and services delivery; and sales, business development and partnerships at IBM, AT&T, Cambridge Technology Partners, Pivotal Software, and Kaivo.

Dan Bohus
Microsoft

Bio

Dan Bohus is a senior researcher in the adaptive systems and interaction group at Microsoft Research. Bohus’ research agenda centers on developing methods that enable interactive systems to reason more deeply about their physical surroundings and seamlessly participate in open-world, multiparty spoken dialog and collaboration with people. Before joining Microsoft Research, Dan received his PhD degree in computer science from Carnegie Mellon University.

Rich Caruana
Microsoft

Bio

Rich Caruana is a senior researcher at Microsoft Research. His current research focus is on learning for medical decision making, transparent modeling, deep learning, and computational ecology. Before joining Microsoft, Caruana was on the faculty in the computer science department at Cornell University, at UCLA’s medical school, and at Carnegie Mellon’s Center for Learning and Discovery. Caruana holds a PhD from Carnegie Mellon University. He has received an NSF CAREER Award, and three best paper awards. He co-chaired KDD in 2007 and serves as area chair for NIPS, ICML, and KDD.

Justine Cassell
Carnegie Mellon University

Bio

Justine Cassell is associate dean of technology strategy and impact and professor in the School of Computer Science at Carnegie Mellon University, and Director Emerita of the Human Computer Interaction Institute. She codirects the Yahoo-CMU InMind partnership on the future of personal assistants. Previously Cassell was faculty at Northwestern University where she founded the Technology and Social Behavior Center and doctoral program. Before that she was a tenured professor at the MIT Media Lab. Cassell received the MIT Edgerton Award and Anita Borg Institute Women of Vision Award, in 2011 was named to the World Economic Forum Global Agenda Council on AI and Robotics, in 2012 was named an AAAS Fellow, and in 2016 was made a Fellow of the Royal Academy of Scotland, and named an ACM Fellow. Cassell has spoken at the World Economic Forum in Davos for the past five years on topics concerning artificial intelligence and society.

Rama Chellappa
University of Maryland

Bio

Rama Chellappa is a distinguished university professor and a Minta Martin professor at the University of Maryland (UMD). Chellappa has worked on Markov random fields, 3D recovery from images, face recognition, tracking, action recognition, compressive sensing, dictionary learning, and domain adaptation. Chellappa is a recipient of an NSF Presidential Young Investigator Award and four IBM Faculty Development Awards. Chellappa received the K.S. Fu Prize from the International Association of Pattern Recognition (IAPR). He is a recipient of the Society and Technical Achievement Awards from the IEEE Signal Processing Society and the Technical Achievement Award from the IEEE Computer Society. At UMD, he received college-level and university-level recognitions for research, teaching, innovation, and mentoring of undergraduate students. Chellappa served as the editor-in-chief of IEEE Transactions on Pattern Analysis and Machine Intelligence. He is a Fellow of IEEE, IAPR, OSA, AAAS, ACM, and AAAI and holds six patents.

Jung Hee Cheon
Seoul National University

Bio

Jung Hee Cheon is a professor in the department of mathematical sciences and the director of the Cryptographic Hard Problems Research Initiatives (CHRI) at Seoul National University (SNU).

He received his BS and PhD degrees in mathematics from KAIST in 1991 and 1997, respectively. Before joining SNU, he worked for Electronics and Telecommunications Research Institute (ETRI), Brown University, and International Christian University (ICU). He received the best paper award in Asiacrypt 2008 and Eurocrypt 2015. His research focuses on computational number theory and cryptology. He is an associate editor of Designs, Codes and Cryptography (DCC) and Journal of Communications and Networks (JCN), and served as program committee member for Crypto, Eurocrypt, and Asiacrypt. He was a cochair of ANTS-XI and Asiacrypt 2015/2016.

Jackie Chi Kit Cheung
McGill University

Bio

Jackie Chi Kit Cheung is an assistant professor in the School of Computer Science at McGill University, where he codirects the Reasoning and Learning Lab. He received his PhD at the University of Toronto, and was awarded a Facebook Fellowship for his doctoral research. He and his team conduct research on computational semantics and natural language generation, with the goal of developing systems that can perform complex reasoning in tasks such as event understanding and automatic summarization.

Mary Czerwinski
Microsoft

Bio

Mary Czerwinski is a research manager of the Visualization and Interaction (VIBE) Research Group. Czerwinski’s research focuses primarily on emotion tracking, information worker task management, and health and wellness for individuals and groups. Her background is in visual attention and multitasking. She holds a PhD in cognitive psychology from Indiana University in Bloomington. Czerwinski was awarded the ACM SIGCHI Lifetime Service Award, was inducted into the CHI Academy, and became an ACM Distinguished Scientist in 2010. Czerwinski became a fellow of the ACM in 2016. She also received the Distinguished Alumni award from Indiana University’s brain and psychological sciences department in 2014.

Cristian Danescu-Niculescu-Mizil
Cornell University

Bio

Cristian Danescu-Niculescu-Mizil is an assistant professor in the information science department at Cornell University. His research aims at developing computational frameworks that can lead to a better understanding of human social behavior, specifically leveraging natural language datasets generated online. He is the recipient of several awards—including the WWW 2013 Best Paper Award, a CSCW 2017 Best Paper Award, and a Google Faculty Research Award—and his work has been featured in The New York Times, NPR’s All Things Considered and NBC’s The Today Show.

Susan Dumais
Microsoft

Bio

Susan Dumais is a distinguished scientist at Microsoft, assistant director of Microsoft Research AI, and an adjunct professor in the information school at the University of Washington. Prior to joining Microsoft, she was at Bell Labs, where she worked on latent semantic analysis, techniques for combining search and navigation, and organizational impacts of information systems. Her current research focuses on user modeling and personalization, context and search, and temporal dynamics of information. Dumais has published widely, and holds several patents on novel retrieval algorithms and interfaces. She is past-chair of ACM’s Special Interest Group in Information Retrieval (SIGIR), and serves on several editorial boards, technical program committees, and government panels. She was elected to the CHI Academy in 2005, an ACM Fellow in 2006, received the SIGIR Gerard Salton Award for Lifetime Achievement in 2009, was elected to the National Academy of Engineering (NAE) in 2011, received the ACM Athena Lecturer and Tony Kent Strix Awards in 2014, was elected to AAAS in 2015, and received the Lifetime Achievement Award from Indiana University department of psychological and brain science in 2016.

Michel Galley
Microsoft

Bio

Michel Galley is a researcher at Microsoft Research. His research interests are in the areas of natural language processing and machine learning, with a particular focus on dialog, machine translation, and summarization. Galley obtained his MS and PhD from Columbia University and his BS from École polytechnique fédérale de Lausanne (EPFL), all in computer science. Before joining Microsoft Research, he was a research associate in the computer science department at Stanford University. He also spent summers visiting University of Southern California’s Information Sciences Institute and the Spoken Dialog Systems group at Bell Labs. Galley served twice as area chair at top natural language processing (NLP) conferences (ACL and NAACL), and was twice best paper finalist (NAACL 2010 and EMNLP 2013).

Jianfeng Gao
Microsoft

Bio

Jianfeng Gao is a partner research manager at the Microsoft AI and Research Group, Redmond. He works on deep learning for text and image processing and leads the development of AI systems for machine reading comprehension, question answering, dialog, and business applications. He has also been principal researcher at the Natural Language Processing Group at Microsoft Research, where he worked on web search, query understanding and reformulation, ads prediction, and statistical machine translation, and was a research lead in the Natural Interactive Services Division at Microsoft, where he worked on Project X. Previously, he was research lead in the Natural Language Computing Group at Microsoft Research Asia, where he and his colleagues developed the first Chinese speech recognition system released with Microsoft Office, the Chinese/Japanese Input Method Editors, and the natural language platform for Microsoft Windows.

Ran Gilad-Bachrach
Microsoft

Bio

Ran Gilad-Bachrach, PhD, is a machine learning researcher in the cryptology group at Microsoft Research. His current projects focus on fusing privacy technologies with AI technologies, with an emphasis on applications in health. Gilad-Bachrach has been conducting research in machine learning for the past 20 years. His studies span theoretical aspects of the field, algorithms, and applications in diverse domains such as education, web search, and computational psychology.

Carla Gomes
Cornell University

Bio

Carla Gomes is a professor of computer science and the director of the Institute for Computational Sustainability at Cornell University. Her research area is AI, with a focus on large-scale constraint reasoning, optimization, and machine learning. Recently, Gomes’ research has been in the new field of computational sustainability, which she helped create as a discipline. She is currently the lead PI of a National Science Foundation Expeditions-in-Computing project that established CompSustNet, a large-scale national and international research network, to further expand the field of computational sustainability. Gomes is a fellow of the Association for the Advancement of Artificial Intelligence (AAAI) and a fellow of the American Association for the Advancement of Science (AAAS).

Mar Gonzalez-Franco
Microsoft

Bio

Mar Gonzalez-Franco is a researcher at Microsoft Research. She studied computer science and completed her PhD in neuroscience and virtual reality in 2014 at University of Barcelona. In her research, she tries to achieve strong immersive experiences using different disciplines: virtual reality, computer graphics, computer vision, and haptics (computer touch), combined with human behavior, perception, and neuroscience. Before joining MSR she held several research positions, including at University College London, Massachusetts Institute of Technology, Tsinghua University, and Airbus Group. Her work in virtual reality has been featured in The Verge, TechCrunch, GeekWire, Fortune, and the World Economic Forum.

Jonathan Gratch
University of Southern California

Bio

Jonathan Gratch is director for virtual human research at the University of Southern California’s (USC) Institute for Creative Technologies, a research full professor of computer science and psychology at USC, and director of USC’s Computational Emotion Group. He completed his PhD in computer science at the University of Illinois in Urbana-Champaign in 1995. Gratch’s research focuses on computational models of human cognitive and social processes, especially emotion, and explores these models’ role in shaping human-computer interactions in virtual environments. He is the founding editor-in-chief of IEEE’s Transactions on Affective Computing, associate editor of Emotion Review and the Journal of Autonomous Agents and Multiagent Systems, and former president of the Association for the Advancement of Affective Computing. He is an AAAI Fellow, a SIGART Autonomous Agent’s Award recipient, a senior member of IEEE, and member of the Academy of Management and the International Society for Research on Emotion. Gratch is the author of more than 300 technical articles.

Amy Greenwald
Brown University

Bio

Amy Greenwald, PhD, is associate professor of computer science at Brown University in Providence, Rhode Island. She studies game-theoretic and economic interactions among computational agents, applied to areas such as autonomous bidding in wireless spectrum auctions and ad exchanges. She was named a Fulbright Scholar, awarded a Sloan Fellowship, nominated for the 2002 Presidential Early Career Award for Scientists and Engineers (PECASE), and named one of the Computing Research Association’s Digital Government Fellows. Before joining the faculty at Brown, Greenwald was employed by IBM’s T.J. Watson Research Center. Her paper entitled “Shopbots and Pricebots” (joint work with Jeff Kephart) was named Best Paper at IBM Research in 2000.

Barbara J. Grosz
Harvard University

Bio

Barbara J. Grosz is Higgins Professor of Natural Sciences in the School of Engineering and Applied Sciences at Harvard University. She has made many contributions to the field of artificial intelligence (AI) through her pioneering research in natural language processing and in theories of multiagent collaboration and their application to human-computer interaction. She was founding dean of science and then dean of Harvard’s Radcliffe Institute for Advanced Study, and she is known for her role in the establishment and leadership of interdisciplinary institutions and for her contributions to the advancement of women in science. She currently chairs the Standing Committee for Stanford’s One Hundred Year Study on Artificial Intelligence and serves on the boards of several scientific and scholarly institutes. A member of the National Academy of Engineering and the American Philosophical Society, she is a fellow of the American Academy of Arts and Sciences, the Association for the Advancement of Artificial Intelligence, and the Association for Computing Machinery, and a corresponding fellow of the Royal Society of Edinburgh. She received the 2009 ACM/AAAI Allen Newell Award and the 2015 IJCAI Award for Research Excellence, AI’s highest honor.

Eric Horvitz
Microsoft

Bio

Eric Horvitz is a technical fellow and director of Microsoft Research Labs. His contributions span theoretical and practical challenges with artificial intelligence. His efforts and collaborations include the fielding of learning and reasoning systems in transportation, healthcare, aerospace, ecommerce, online services, and operating systems. He has been elected fellow of the National Academy of Engineering (NAE), the Association for the Advancement of AI (AAAI), the American Association for the Advancement of Science (AAAS), and the American Academy of Arts and Sciences. He received the Feigenbaum Prize and the Allen Newell Award for research contributions in AI. He was inducted into the CHI Academy for advances in human-computer collaboration. He has served as president of AAAI, chair of the AAAS Section on Computing, and on advisory committees for the National Institutes of Health, the National Science Foundation, the Computer Science and Telecommunications Board (CSTB), DARPA, and the President’s Council of Advisors on Science and Technology.

He received PhD and MD degrees from Stanford University.

Gang Hua
Microsoft

Bio

Gang Hua is a principal researcher/research manager at Microsoft Research. He was an associate professor of computer science at Stevens Institute of Technology between 2011 and 2015, while holding an academic advisor position at IBM T. J. Watson Research Center. Before that, he was a research staff member at IBM T. J. Watson Research Center, a senior researcher at Nokia Research Center, and a scientist at Microsoft Live Labs Research. He received his PhD in electrical and computer engineering from Northwestern University in 2006. He will serve as a program chair for CVPR 2019, the flagship computer vision conference. He is the recipient of the 2015 IAPR Young Biometrics Investigator Award. He is an IAPR Fellow, an ACM Distinguished Scientist, and a senior member of the IEEE. He holds 19 US patents and has 12 more patents pending.

Katsushi Ikeuchi
Microsoft

Bio

Katsushi Ikeuchi is a principal researcher of Microsoft Research. He received his PhD in information engineering from the University of Tokyo in 1978. He worked at MIT-AI Lab as a postdoc fellow for three years, at Electrotechnical Laboratory (currently AIST) as a research member for five years, at CMU-Robotics Institute as a faculty member for 10 years, and at the University of Tokyo as a faculty member for 19 years, joining Microsoft Research in 2015. His research interests span computer vision, robotics, and computer graphics. He has received several awards, including IEEE-PAMI Distinguished Researcher Award, the Okawa Prize, and the Medal of Honor with purple ribbon from the Emperor of Japan. He is a fellow of IEEE, IEICE, IPSJ, and RSJ.

Kori Inkpen
Microsoft

Bio

Kori Inkpen is a principal researcher and research manager at Microsoft Research, focusing on human-computer interaction and computer-supported collaboration. She explores collaboration across a variety of domains including home, work, education, healthcare and fun, with a current focus on video communication for telepresence. Inkpen manages the neXus group at Microsoft Research, which combines research in social computing, computer-supported collaborative work, and information visualization. She has also been a professor of computer science at Dalhousie University and Simon Fraser University. She received her PhD in computer science from the University of British Columbia.


Prateek Jain
Microsoft

Bio

Prateek Jain is a researcher at Microsoft Research India. He received his PhD in computer science from the University of Texas at Austin and his bachelor of technology degree in computer science from IIT Kanpur. He is interested in high-dimensional statistics/optimization, non-convex optimization, and numerical linear algebra. He has served on several senior program committees for top machine learning conferences and also won ICML-2007 and CVPR-2008 best student paper awards.

Ana Tajadura-Jiménez
Universidad Loyola Andalucía & University College London

Bio

Ana Tajadura-Jiménez studied telecommunications engineering at Universidad Politécnica de Madrid. She obtained an MSc in Digital Communications Systems and Technology and a PhD in applied acoustics at Chalmers University of Technology, Sweden. Tajadura-Jiménez was a post-doctoral researcher in the Lab of Action and Body at Royal Holloway, University of London, an ESRC Future Research Leader at University College London Interaction Centre (UCLIC), and principal investigator (PI) of the project The Hearing Body. Since 2016 Tajadura-Jiménez has been a Ramón y Cajal research fellow at Universidad Loyola Andalucía (ULA) and Honorary Research Associate at UCLIC. At ULA, she is part of the Human Neuroscience Laboratory and coordinates the research line called “Multisensory stimulation to alter the perception of body and space, emotion and motor behavior.” She is currently PI of the Project Magic Shoes. Tajadura-Jiménez’s research is empirical and multidisciplinary, combining perspectives of psychoacoustics, neuroscience, and human/computer interaction.

Lucas Joppa
Microsoft

Bio

Lucas Joppa is the chief environmental scientist for Microsoft, identifying the role that Microsoft artificial intelligence technologies can play in assisting with global environmental solutions. Topics of interest include some of the hardest challenges in environmental sustainability, including mitigating and adapting to changing climates, ensuring robust food systems and resilient water supplies, and stemming the loss of biodiversity. Joppa also provides external leadership in the technology and science communities through boards, speaking, and publications. Previously, Joppa led science programs at Microsoft Research, focusing on the use of artificial intelligence, machine learning, and ubiquitous computing technologies for monitoring, modeling, and managing earth’s natural environments.

Sham Kakade
University of Washington

Bio

Sham Kakade is a Washington Research Foundation data science chair, with a joint appointment in both the computer science and engineering and statistics departments at the University of Washington. He completed his PhD at the Gatsby Computational Neuroscience unit at University College London, and earned his BS in physics at Caltech. Before joining the University of Washington, Kakade was a principal research scientist at Microsoft Research, New England. Prior to this, he was an associate professor at the department of statistics, Wharton, University of Pennsylvania, and an assistant professor at the Toyota Technological Institute at Chicago. He works on both theoretical and applied questions in machine learning and artificial intelligence, focusing on designing both statistically and computationally efficient algorithms for machine learning, statistics, and artificial intelligence. He has chaired many conferences and received numerous awards.

Ece Kamar
Microsoft

Bio

Ece Kamar is a researcher at the adaptive systems and interaction group at Microsoft Research Redmond. Kamar earned her PhD in computer science from Harvard University. She has served as area chair and program committee member for various conferences on AI and was a member of the first AI 100 panel, studying how AI will affect the way we live. She works on several subfields of AI; including planning, machine learning, multi-agent systems and human-computer teamwork, with a focus on combining machine and human intelligence in real-world applications.

Subbarao Kambhampati
Arizona State University

Bio

Subbarao Kambhampati (Rao) is a professor of computer science at Arizona State University, and is the current president of the Association for the Advancement of AI (AAAI), and a trustee of the Partnership for AI. His research focuses on automated planning and decision making, especially in the context of human-aware AI systems. He is an award-winning teacher and spends significant time pondering the public perceptions and societal impacts of AI. He was a National Science Foundation young investigator, and is a fellow of AAAI. He has served the AI community in multiple roles, including as the program chair for IJCAI 2016 and program co-chair for AAAI 2005. Kambhampati received his bachelor’s degree from Indian Institute of Technology, Madras, and his PhD from University of Maryland, College Park.

Shaun Kane
University of Colorado Boulder

Bio

Shaun Kane is an assistant professor in the department of computer science and the department of information science at the University of Colorado Boulder. He is director of the CU Superhuman Computing Lab. His research explores accessible user interfaces, tangible interaction, and wearable computing. His research has been supported by a Google Lime Scholarship, an NSF CAREER Award, and an Alfred P. Sloan Fellowship. He received his PhD from The Information School at the University of Washington in 2011.

Ravi Kannan
Microsoft

Bio

Ravi Kannan is a principal researcher at Microsoft Research India, where he leads the algorithms research group. He also holds an adjunct faculty position in the computer science and automation department of the Indian Institute of Science. Before joining Microsoft, Kannan was the William K. Lanman, Jr. professor of computer science and applied mathematics at Yale University. He has also taught at MIT and CMU. Kannan’s research interests include algorithms, theoretical computer science and discrete mathematics, as well as optimization. He was awarded the Knuth Prize for developing influential algorithmic techniques aimed at solving long-standing computational problems, the Fulkerson Prize for his work on estimating the volume of convex sets, and the Distinguished Alumnus award of the Indian Institute of Technology, Bombay.

Helmut Katzgraber
Texas A&M University

Bio

Helmut Katzgraber is a professor in the physics and astronomy department at Texas A&M University. His main research fields in computational physics are the investigation of disordered and complex systems, as well as the study of problems related to quantum computing. He received his PhD in physics in 2001 at the University of California, Santa Cruz, for numerical studies of spin-glass systems, and has held postdoctoral positions at the University of California, Davis, and the Institute for Theoretical Physics at ETH Zurich. In 2007 he was awarded a Swiss National Science Foundation professorship and in 2009 he joined Texas A&M University. In 2011 he received an NSF CAREER award. He is also external faculty member at the Santa Fe Institute in New Mexico and consults for 1QB Information Technologies and Microsoft Research.

Taesoo Kim
Georgia Institute of Technology

Bio

Taesoo Kim is an assistant professor in the School of Computer Science at Georgia Tech. He also serves as the director of the Georgia Tech Systems Software and Security Center (GTS3). His research focuses on systems security, in particular the design and implementation of secure systems and the clear separation of trusted components. His thesis work focused on detecting and recovering from attacks on computer systems. He holds a BS from KAIST, and an SM and PhD in computer science from MIT.

Ian Lane
Carnegie Mellon University

Bio

Ian Lane, PhD, is an associate research professor at Carnegie Mellon University, working in the areas of speech recognition, natural language understanding, and situated interaction. Lane leads a research group of 10 PhD students in Silicon Valley focused on these research areas. Lane’s group has done novel research in the areas of GPU-accelerated speech recognition, context-aware spoken language understanding, and more recently his group has demonstrated the effectiveness of end-to-end trainable models for speech recognition and dialog. He has more than 100 peer-reviewed publications and has obtained numerous patents and awards for his work.

Walter S. Lasecki
University of Michigan

Bio

Walter S. Lasecki is an assistant professor of computer science and engineering at the University of Michigan, Ann Arbor, where he directs the Crowds+Machines (CROMA) Lab. He and his students create interactive intelligent systems that are robust enough to be used in real-world settings by combining both human and machine intelligence to exceed the capabilities of either. These systems let people be more productive, and improve access to the world for people with disabilities. Lasecki received his PhD and MS from the University of Rochester in 2015 and a BS in computer science and mathematics from Virginia Tech in 2010. He has previously held visiting research positions at CMU, Stanford, Microsoft Research, and Google X.

Kristin Lauter
Microsoft

Bio

Kristin Lauter is a principal researcher and research manager for the cryptography group at Microsoft Research. Her personal research interests include algorithmic number theory, elliptic curve, pairing-based and lattice-based cryptography, homomorphic encryption, post-quantum cryptography, and cloud security and privacy.

Lauter is currently serving as past president of the Association for Women in Mathematics, and on the council of the American Mathematical Society. She was selected to be a fellow of the American Mathematical Society in 2014. She was a cofounder of the Women in Numbers network and is also an affiliate professor in the department of mathematics at the University of Washington. In 2008, Lauter, together with her coauthors, was awarded the Selfridge Prize in computational number theory.

Percy Liang
Stanford University

Bio

Percy Liang is an assistant professor of computer science at Stanford University. He holds a BS from MIT and a PhD from UC Berkeley. His research interests include modeling natural language semantics and developing machine learning methods that infer rich latent structures from limited supervision. His awards include the IJCAI Computers and Thought Award (2016), an NSF CAREER Award (2016), a Sloan Research Fellowship (2015), a Microsoft Research Faculty Fellowship (2014), and the best student paper award at the International Conference on Machine Learning (2008).

Rangan Majumder
Microsoft

Bio

Rangan Majumder is the group program manager for search and artificial intelligence in the Bing division. His team uses AI techniques such as machine learning to solve customer and business problems across various products including finding what you are looking for on the web through Bing and answering open domain questions through Cortana. He also manages the search platform that runs all the complex algorithms, including large deep learning models, to keep Bing running at high scale with low latency and high availability.

Daniel McDuff
Microsoft

Bio

Daniel McDuff is a researcher at Microsoft and works on scalable tools to enable the automated recognition and analysis of emotions and physiology. He is also a visiting scientist at Brigham and Women’s Hospital in Boston. McDuff completed his PhD in the affective computing group at the MIT Media Lab in 2014 and has a BA and MS from Cambridge University. Previously, McDuff was director of research at Affectiva and a post-doctoral research affiliate at the MIT Media Lab. During his PhD, he built state-of-the-art facial expression recognition software and led the analysis of the world’s largest database of facial expression videos. His work has received nominations and awards from Popular Science magazine as one of the top inventions in 2011, South-by-South-West Interactive (SXSWi), The Webby Awards, ESOMAR, and the Center for Integrated Medicine and Innovative Technology (CIMIT). His projects have been reported in many publications including The Times, The New York Times, The Wall Street Journal, BBC News, New Scientist, and Forbes magazine. McDuff was named a 2015 WIRED Innovation Fellow and has spoken at TEDx Berlin.

Louis-Philippe Morency
Carnegie Mellon University

Bio

Louis-Philippe Morency is an assistant professor in the Language Technologies Institute at Carnegie Mellon University, where he leads the Multimodal Communication and Machine Learning Laboratory (MultiComp Lab). He was formerly a research assistant professor in the computer science department at the University of Southern California (USC) and a research scientist at the USC Institute for Creative Technologies. Morency received his PhD and master’s degrees from the MIT Computer Science and Artificial Intelligence Laboratory. His research focuses on building the computational foundations that enable computers to analyze, recognize, and predict subtle human communicative behaviors during social interactions. In particular, Morency was lead co-investigator for the multi-institution effort that created SimSensei and MultiSense, two technologies to automatically assess nonverbal behavior indicators of psychological distress. He is currently chair of the advisory committee for the ACM International Conference on Multimodal Interaction and an associate editor at IEEE Transactions on Affective Computing.

Meredith Ringel Morris
Microsoft

Bio

Meredith Ringel Morris is a principal researcher at Microsoft Research, where she is affiliated with the Ability, Enable, and neXus research teams. She is also an affiliate faculty member at the University of Washington, in both the department of computer science and engineering and the School of Information. Morris earned a PhD in computer science from Stanford University in 2006, and did her undergraduate work in computer science at Brown University. Her primary research area is human-computer interaction, specifically computer-supported cooperative work and social computing (CSCW). Her current research focuses on the intersection of CSCW and accessibility, creating technologies that help people with disabilities connect with others in social and professional contexts. Past research contributions include work on facilitating cooperative interactions in the domain of surface computing, and on supporting collaborative information retrieval via collaborative web search and friendsourcing.

Besmira Nushi
Microsoft

Bio

Besmira Nushi is a researcher in the Adaptive Systems and Interaction group at Microsoft Research Redmond. Nushi obtained her PhD from ETH Zurich in Switzerland, where her research sat at the intersection of machine learning and human computation. Prior to her PhD studies, she completed a double-degree master’s program in computer science as an Erasmus Mundus scholar at the University of Trento (Italy) and RWTH Aachen University (Germany). Her work focuses on integrating machine computation and collective crowd intelligence to solve problems that are difficult to solve otherwise. Her research interests include machine learning, crowdsourcing, integrative intelligent systems, human-computer interaction, and data management.

Sayan Pathak
Microsoft

Bio

Sayan Pathak, PhD, is a principal machine learning scientist on the Cognitive Toolkit (formerly CNTK) team at Microsoft. He has published and commercialized cutting-edge computer vision and machine learning technology applied to medical imaging, neuroscience, computational advertising, and social network domains. Prior to joining Microsoft, he worked at the Allen Institute for Brain Science. He has been a consultant to several startups and a principal investigator on several US National Institutes of Health (NIH) grants. He has been a faculty member at the University of Washington for 15 years and an affiliate professor of computer science and engineering at the Indian Institute of Technology, Kharagpur, India, for more than four years.

Christopher Potts
Stanford University

Bio

Christopher Potts is professor of linguistics and computer science, and director of the Center for the Study of Language and Information (CSLI), at Stanford. In his research, he uses computational methods to explore how emotion is expressed in language and how linguistic production and interpretation are influenced by the context of utterance. He is the author of the 2005 book The Logic of Conventional Implicatures as well as numerous scholarly papers in computational and theoretical linguistics. He received his PhD in linguistics from the University of California, Santa Cruz.

Yanmin Qian
Shanghai Jiao Tong University

Bio

Dr. Yanmin Qian is an associate professor at Shanghai Jiao Tong University (SJTU), China. He received his PhD from the department of electronic engineering at Tsinghua University, China. He was previously an assistant professor in the department of computer science and engineering at SJTU, and also worked as an associate researcher with the speech group in the Cambridge University engineering department, UK. He was one of the key members who designed and implemented the Cambridge Multi-Genre Broadcast (MGB) speech processing system, which won all four tasks of the first MGB Challenge in 2015. He is a member of IEEE and ISCA and has published more than 60 papers on speech and language processing. His current research interests include acoustic and language modeling for speech recognition, speaker and language recognition, speech separation, natural language understanding, deep learning, and multimedia signal processing.

Gireeja Ranade
Microsoft

Bio

Gireeja Ranade is a postdoctoral researcher at Microsoft Research Redmond, working with the theory group and the adaptive systems and interaction group. She has been a lecturer in electrical engineering and computer sciences (EECS) at UC Berkeley, where she received her MS and PhD. She also holds an SB in EECS from MIT. Her research interests include control for autonomous systems, information theory, wireless communications, brain-machine interfaces, and crowdsourcing.

Alan Ritter
Ohio State University

Bio

Alan Ritter is an assistant professor in computer science at Ohio State University. His research interests include natural language processing, social media analysis, and machine learning. Ritter completed his PhD at the University of Washington and was a postdoctoral fellow in the Machine Learning Department at Carnegie Mellon University. He has received an NDSEG fellowship, a best student paper award at IUI, an NSF CRII, and has served as an area chair for ACL, EMNLP, and NAACL.

Harry Shum
Microsoft

Bio

Harry Shum, PhD, is executive vice president of Microsoft’s Artificial Intelligence (AI) and Research group.

He is responsible for driving the company’s overall AI strategy and forward-looking research and development efforts spanning infrastructure, services, apps and agents. He oversees AI-focused product groups—the Information Platform Group, Bing, and Cortana product groups—and the ambient computing and robotics teams.

He also leads Microsoft Research, one of the world’s premier computer science research organizations, and its integration with the engineering teams across the company.

Previously, Shum served as the corporate vice president responsible for Bing search product development from 2007 to 2013. Prior to that, he oversaw the research activities at Microsoft Research Asia and the lab’s collaborations with universities in the Asia-Pacific region, and was responsible for the Internet Services Research Center, an applied research organization dedicated to advanced technology investment in search and advertising at Microsoft. He joined Microsoft Research in 1996 as a researcher.

Shum is an IEEE Fellow and an ACM Fellow for his contributions to computer vision and computer graphics. He received his PhD in robotics from the School of Computer Science at Carnegie Mellon University. In 2017, he was elected to the National Academy of Engineering of the United States.

Mel Slater
University of Barcelona

Bio

Mel Slater, DSc, is an ICREA research professor in psychology at the University of Barcelona. He has been professor of virtual environments in the department of computer science at University College London since 1997. He has been involved in virtual reality research since the early 1990s and, since 1989, has supervised 38 PhDs in graphics and virtual reality. In 2005 he received the IEEE Virtual Reality Career Award “In Recognition of Seminal Achievements in Engineering Virtual Reality.” He held the European Research Council grant TRAVERSE. He is field editor of Frontiers in Robotics and AI and chief editor of its Virtual Environments section. He has contributed to the scientific study of virtual reality and to the technical development of the field, including its applications in clinical psychology and the cognitive neuroscience of how the brain represents the body. He is a founder of the company Virtual Bodyworks S.L. and is currently an Immersive Fellow at Digital Catapult London.

Dawn Song
University of California, Berkeley

Bio

Dawn Song is a professor in the department of electrical engineering and computer science at UC Berkeley. Her research interests lie in deep learning and security. She is the recipient of various awards, including the MacArthur Fellowship, the Guggenheim Fellowship, the NSF CAREER Award, the Alfred P. Sloan Research Fellowship, the MIT Technology Review TR-35 Award, the George Tallman Ladd Research Award, the Okawa Foundation Research Award, the Li Ka Shing Foundation Women in Science Distinguished Lecture Series Award, faculty research awards from IBM, Google, and other major tech companies, and best paper awards from top conferences. She obtained her PhD from UC Berkeley. Prior to joining UC Berkeley, she was an assistant professor at Carnegie Mellon University.

Krysta Svore
Microsoft

Bio

Krysta Svore is a principal researcher at Microsoft Research, where she manages the Quantum Architectures and Computation (QuArC) group. Svore’s research includes the development and implementation of quantum algorithms, including the design of a scalable, fault-tolerant software architecture for translating a high-level quantum program into a low-level, device-specific quantum implementation, and the study of quantum error correction codes and noise thresholds. She has also developed machine-learning methods for web applications, including ranking, classification, and summarization algorithms. Svore received an ACM Best of 2013 Notable Article award. In 2010, she was a member of the winning team of the Yahoo! Learning to Rank Challenge. She received her PhD in computer science with highest distinction from Columbia University in 2006 and her BA from Princeton University in Mathematics and French in 2001. She is a senior member of the Association for Computing Machinery (ACM), serves as a representative for the Academic Alliance of the National Center for Women and Information Technology (NCWIT), and is an active member of the American Physical Society (APS).

Milind Tambe
University of Southern California

Bio

Milind Tambe is the Helen N. and Emmett H. Jones Professor in Engineering at the University of Southern California (USC) and founding codirector of CAIS, the USC Center for AI in Society. He is a fellow of AAAI and ACM, and a recipient of the ACM/SIGART Autonomous Agents Research Award, the Christopher Columbus Fellowship Foundation Homeland Security Award, the INFORMS Wagner Prize for Excellence in Operations Research Practice, and the Rist Prize of the Military Operations Research Society, as well as an influential paper award and multiple best paper awards at conferences such as AAMAS, IJCAI, IAAI, and IVA. Tambe’s pioneering real-world deployments of his “security games” research, based on computational game theory, have led him and his team to receive commendations from the US Coast Guard, the US Federal Air Marshals Service, and the LA Airport Police. He has also cofounded a company based on his research, Avata Intelligence, where he serves as director of research.

Indrani Medhi Thies
Microsoft

Bio

Indrani Medhi Thies is a researcher in the technology for emerging markets group at Microsoft Research in Bangalore, India. Her research interests are in the areas of user interfaces, user experience design, and information and communication technologies for global development. Thies’ primary work has been in user interfaces for low-literate and novice technology users, in which she is considered a world expert. Her recent work is in user experience of conversational agents, mainly chatbots. Thies’ distinctions include the 2017 ACM SIGCHI Social Impact award, an MIT TR35 award, ACM SIGCHI and ACM CSCW best paper honorable mentions, a “Young Indian Leader” award from CNN IBN, and being featured on the list of Fortune magazine’s 2010 “50 Smartest People in Technology”. Thies has a PhD from the Industrial Design Centre, IIT Bombay, India.

Matthias Troyer
Microsoft

Bio

Matthias Troyer is a principal researcher in the Quantum Architectures and Computation (QuArC) group at Microsoft Research. He received his PhD in 1994 from ETH Zurich and held a postdoctoral position at the University of Tokyo. He was professor of computational physics at ETH Zurich until taking a leave of absence to join the Microsoft quantum computing program at the beginning of 2017. He is a fellow of the American Physical Society, a trustee of the Aspen Center for Physics, and a recipient of the Rahman Prize for Computational Physics of the American Physical Society. His research interests span from high-performance computing and quantum computing to the simulation of quantum devices and island ecosystems.

Lucy Vanderwende
Microsoft

Bio

Lucy Vanderwende’s research focuses on the acquisition and representation of semantic information, specifically the implicit meaning inferred from explicit signals, both linguistic and nonlinguistic. Vanderwende holds a PhD in computational linguistics from Georgetown University. She worked at IBM Bethesda on natural language processing and was a visiting scientist at the Institute for Systems Science in Singapore. Vanderwende was program cochair for NAACL in 2009 and general chair for NAACL in 2013. She is also affiliate associate faculty in the University of Washington department of biomedical health informatics, and a member of the UW BioNLP group, which is using NLP technology to extract critical information from patient reports.

Santosh Vempala
Georgia Institute of Technology

Bio

Santosh Vempala is a distinguished professor of computer science at the Georgia Institute of Technology. His main work has been in the area of theoretical computer science. Vempala attended Carnegie Mellon University, where he received his PhD. In 1997 he was awarded a Miller Fellowship at Berkeley; he was subsequently a professor in the mathematics department at MIT before moving to Georgia Tech. His work has been in the areas of algorithms, randomized algorithms, computational geometry, and computational learning theory. He has authored books on random projection and spectral methods. Vempala has received numerous awards, including a Guggenheim Fellowship and a Sloan Fellowship, and was listed in Georgia Trend’s 40 under 40. He was named a Fellow of ACM in 2015.

Evelyne Viegas
Microsoft

Bio

Evelyne Viegas is a director of artificial intelligence at Microsoft. In her current role, she creates initiatives that focus on intelligent information as an enabler of innovation, working in partnership with business groups, universities, and government agencies worldwide. In particular, she develops AI programs that encourage AI experimentation via cloud-based services and emphasize the notion of co-opetitions, or collaborative competitions, to drive open innovation.

Prior to her present role, Viegas was a technical lead at Microsoft delivering natural language processing components to Microsoft Office and Windows. Before that, she completed her PhD in France and worked on machine translation as a principal investigator at the Computing Research Laboratory in New Mexico. Viegas serves on international editorial, program and award committees.

Mike Walker
Microsoft

Bio

Mike Walker is a principal researcher at Microsoft working on security AI. Prior to joining Microsoft, Mike led DARPA’s Cyber Grand Challenge, a two-year, $60 million contest to construct and complete the first prototypes of reasoning cyberdefense AI. In 2016 at the DEF CON hacking contest, these prototypes took their first flight into the game of hackers, Capture the Flag (CTF), landing zero-day exploits and writing patches in a fully autonomous battle. Walker has worked in a policy advisory role, testifying to the President’s Commission on Cybersecurity and serving as contributor and panelist to Center for Strategic and International Studies’ (CSIS) Surviving on a Diet of Poisoned Fruit. Prior to DARPA, he worked as a research lab leader and principal vulnerability researcher focusing on bringing the power of supercomputer-based automation to the field of software safety. Walker has played in DEF CON CTF finals, coached CTF teams, and built CTFs throughout his career.

Nathan Wiebe
Microsoft

Bio

Nathan Wiebe is a researcher in quantum computing who focuses on quantum methods for machine learning and simulation of physical systems. His work has provided the first quantum algorithms for deep learning, least-squares fitting, quantum simulation using linear combinations of unitaries, Hamiltonian learning, and efficient Bayesian phase estimation, and he has pioneered the use of particle filters for characterizing quantum devices, among many other contributions. He is currently a researcher at Microsoft Research Station Q, in the Quantum Architectures and Computation (QuArC) group.

Jason D. Williams
Microsoft

Bio

Jason D. Williams has published more than 55 peer-reviewed papers on dialog systems and related areas, and has received five best paper/presentation awards for work on statistical approaches to dialog systems, including the use of POMDPs (partially observable Markov decision processes), reinforcement learning, turn taking, and empirical user studies. In 2012, he initiated the Dialog State Tracking Challenge series; in 2014, he shipped components of the first release of Microsoft Cortana; and in 2015, he launched Microsoft Language Understanding Intelligent Service. He is president of SIGDIAL, and an elected member of the IEEE Speech and Language Technical Committee (SLTC) in the area of spoken dialog systems. Prior to Microsoft, Williams held positions at AT&T Labs Research, Tellme Networks, and McKinsey. Over the past 15 years, his systems have conducted tens of millions of dialogs with real users.

Cha Zhang
Microsoft

Bio

Dr. Cha Zhang is a principal researcher at Microsoft Research, where he currently manages the Microsoft Cognitive Toolkit (CNTK) team. He received BS and MS degrees, both in electronic engineering, from Tsinghua University, Beijing, China, and holds a PhD in electrical and computer engineering from Carnegie Mellon University. Before joining the CNTK team, he spent more than 10 years developing audio, image, and video processing and machine learning techniques; he has published over 80 technical papers and holds more than 20 US patents. He won the best paper award at ICME 2007 and the best student paper award at ICME 2010. He was program co-chair for VCIP 2012 and general co-chair for ICME 2016. He currently serves as an associate editor for IEEE Transactions on Circuits and Systems for Video Technology and IEEE Transactions on Multimedia.

Song-Chun Zhu
University of California, Los Angeles

Bio

Song-Chun Zhu is professor of statistics and computer science at UCLA and director of the UCLA Center for Vision, Learning, Cognition, and Autonomy. He received his PhD from Harvard University in 1996 and has worked in vision, learning, cognition, natural language processing, AI, cognitive robots, and more. His work in computer vision received the D. Marr Prize in 2003 for image parsing (with Tu et al.) and Marr Prize honorary nominations in 1999 for texture modeling and in 2007 for object modeling (with Y. Wu et al.). He received the J.K. Aggarwal Prize from the International Association for Pattern Recognition in 2008 for “contributions to a unified foundation for visual pattern conceptualization, modeling, learning, and inference,” and the Helmholtz Test-of-Time Prize at ICCV 2013. As a junior faculty member, he received a Sloan Research Fellowship in computer science, an NSF CAREER Award, and an ONR Young Investigator Award in 2001. He has been a fellow of the IEEE Computer Society since 2011. He is leading two consecutive Office of Naval Research Multidisciplinary University Research Initiative projects, on scene/event understanding and commonsense reasoning, respectively. He has twice served as a general chair for the Conference on Computer Vision and Pattern Recognition, in 2012 and 2019.

Roy Zimmermann
Microsoft

Bio

Roy Zimmermann is a director in Microsoft Research Outreach where he leads special initiatives and helps strengthen Microsoft institutional relationships with universities and other partners around the world. Roy works with research and product groups inside Microsoft to help amplify their work and strengthen and foster new research partnerships and relationships with universities, governments, industry, and other organizations around the world.

Prior to joining Microsoft, Zimmermann spent 11 years working with public and private sector partners and universities in countries in Africa, Asia, and the Middle East, helping to increase access to and improve the quality of education through appropriate uses of technology. He spent five years working with public television and PBS Kids developing new educational media and children’s television programming. Zimmermann has worked as a classroom teacher in the United States and overseas with the Peace Corps in Papua New Guinea. He received a PhD in education from UCLA, where his research focused on effective integration of technology in secondary schools.

Technology Showcase

Technology Showcase

Accelerating Research Using Networked FPGAs

Presenter: Dan Fay

[Full Video]

Project Catapult connects FPGAs together through a network to create a hyperscale, reconfigurable accelerator fabric. See how to use the Project Catapult cluster at the Texas Advanced Computing Center (TACC) for research. Apply for access at aka.ms/catapult-academic.

AI for Earth Classification

Presenter: Lucas Joppa

[Video Abstract]

Understanding the land cover types and locations within specific regions enables effective environmental conservation. With sufficiently high spatial and temporal resolution, scientists and planners can identify which natural resources are at risk and the level of risk. This information helps inform decisions about how and where to focus conservation efforts. Current land cover products don’t meet these spatial and temporal requirements. Microsoft AI for Earth Program’s Land Cover Classification Project will use deep learning algorithms to deliver a scalable Azure pipeline for turning high-resolution US government images into categorized land cover data at regional and national scales. The first application of the platform will produce a land cover map for the Puget Sound watershed. This watershed is Microsoft’s own backyard and one of the nation’s most environmentally and economically complex and dynamic landscapes.
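The end product of such a classification pipeline is a per-pixel label map that can be aggregated into the regional statistics planners need. A minimal sketch of that last aggregation step, using hypothetical class labels and a toy label grid (the project's actual classes and pipeline are not specified here):

```python
from collections import Counter

def land_cover_summary(label_grid, cell_area_m2=1.0):
    """Aggregate a per-pixel land cover classification into area per class.

    label_grid: 2D grid of predicted class labels (one per pixel).
    cell_area_m2: ground area covered by each pixel.
    """
    counts = Counter(label for row in label_grid for label in row)
    return {cls: n * cell_area_m2 for cls, n in counts.items()}

# Toy 2x3 prediction grid with made-up labels; real maps span whole watersheds.
grid = [["forest", "forest", "water"],
        ["forest", "urban",  "water"]]
print(land_cover_summary(grid, cell_area_m2=100.0))
# -> {'forest': 300.0, 'water': 200.0, 'urban': 100.0}
```

Summaries like this are what make it possible to say which fraction of a watershed is at risk, and to track it over time as new imagery arrives.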

Bing Visual Search

Presenter: Linjun Yang

[Video Abstract | Full Video]

Visual search, AKA search by image, is a new way of searching for information using an image or part of an image as the query. Similar to text search, which connects keyword queries to knowledge on the web, the ultimate goal of visual search is to connect camera captured data or images to web knowledge. Bing has been continuously improving its visual search feature, which is now available on Bing desktop, mobile, and apps, as well as Edge browser. It can be used not only for searching for similar images but also for task completion, such as looking for similar products while shopping. Bing image search now also features image annotation and object detection, to further improve the user experience. This demo will show these techniques and the scenarios for which the techniques were developed.
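Under the hood, similar-image retrieval of this kind typically embeds each image as a feature vector and ranks the index by similarity to the query's embedding. A minimal sketch of that idea, using made-up three-dimensional "embeddings" in place of real CNN features (Bing's actual models and index are, of course, vastly larger):

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def visual_search(query_vec, index, k=3):
    """Rank indexed images by embedding similarity to the query; return top k."""
    ranked = sorted(index.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]

# Toy index: image name -> hypothetical embedding from some feature extractor.
index = {
    "red_shoe.jpg":  [0.9, 0.1, 0.0],
    "blue_shoe.jpg": [0.8, 0.3, 0.1],
    "cat.jpg":       [0.0, 0.2, 0.9],
}
print(visual_search([0.85, 0.2, 0.05], index, k=2))
# -> ['red_shoe.jpg', 'blue_shoe.jpg']
```

A production system replaces the exhaustive `sorted` pass with approximate nearest-neighbor search so billions of images can be queried in milliseconds.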

Cortana, Your Personal Assistant

Presenter: Kate Kelly

[Full Video]

From ferry schedules to dinner reservations, Cortana is the digital assistant designed to help people get things done. Cortana will eventually be everywhere people need assistance—on the phone, PC, Xbox One, and other places like the home and car. Cortana is part of the Microsoft portfolio of intelligent products and services, and current research is designed to take it beyond voice search to create an assistant that is truly intelligent.

Custom Vision Service

Presenter: Anna Roth

[Video Abstract | Full Video]

This demo shows how Custom Vision Service can be applied to many AI vision applications. For example, if a client needs to build a custom image classifier, they can submit a few images of objects, and a model is deployed at the touch of a button. Microsoft Office is also using Custom Vision Service to automatically caption images in PowerPoint.
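One common way to build a usable classifier from only a few example images is to embed them with a pretrained network and label new images by the nearest class centroid. A toy sketch along those lines, with invented two-dimensional embeddings (the service's actual training procedure is not described in the source):

```python
def centroid(vectors):
    """Mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(example, centroids):
    """Assign the class whose centroid is closest (squared Euclidean)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda cls: dist(example, centroids[cls]))

# A few labeled embeddings per class (assumed to come from a pretrained model).
train = {"mug":  [[0.9, 0.1], [0.8, 0.2]],
         "shoe": [[0.1, 0.9], [0.2, 0.8]]}
centroids = {cls: centroid(vs) for cls, vs in train.items()}
print(classify([0.85, 0.15], centroids))  # -> "mug"
```

Because the heavy lifting lives in the pretrained embedding, a handful of examples per class is often enough, which is what makes "submit a few images, press a button" workflows possible.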

Customizing Speech Recognition for Higher Accuracy Transcriptions

Presenter: Mike Seltzer

[Video Abstract | Full Video]

Two of the most important components of speech recognition systems are the acoustic model and the language model. The models behind Microsoft’s speech recognition engine have been optimized for certain usage scenarios, such as interacting with Cortana on a smartphone, searching the web by voice, or sending text messages to a friend. But if a user has specific needs, such as recognizing domain-specific vocabulary or understanding accented speech, then the acoustic and language models need to be customized. This demo shows the benefits of customizing acoustic and language models to improve the accuracy of speech recognition for lectures. Using Custom Speech Service (a Cognitive Service), the demo shows how the technology can tune speech recognition for specific topics and lecturers.
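At its simplest, language-model customization can be thought of as interpolating a general-purpose model with a model estimated from domain text. The sketch below illustrates that idea with unigram probabilities; the corpora, counts, and interpolation weight are illustrative assumptions, not the internals of Custom Speech Service.

```python
from collections import Counter

def interpolated_unigram(general_counts, domain_counts, lam=0.5):
    """Blend a general and a domain unigram model:
    p(w) = lam * p_domain(w) + (1 - lam) * p_general(w)."""
    g_total = sum(general_counts.values())
    d_total = sum(domain_counts.values())
    vocab = set(general_counts) | set(domain_counts)
    return {
        w: lam * domain_counts.get(w, 0) / d_total
           + (1 - lam) * general_counts.get(w, 0) / g_total
        for w in vocab
    }

# Illustrative counts: a general corpus vs. a lecture transcript.
general = Counter({"the": 50, "cat": 5, "gradient": 1})
lecture = Counter({"the": 20, "gradient": 10, "descent": 10})

model = interpolated_unigram(general, lecture, lam=0.7)
# Domain-specific words like "gradient" now get a boosted probability,
# so the recognizer is less likely to mishear them as common words.
```

The same intuition carries over to real systems, where the interpolation happens over n-gram or neural language models rather than unigrams.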

Deep Artistic Style Transfer: From Images to Videos

Presenter: Gang Hua

[Video Abstract | Full Video]

This demo shows several applications of Microsoft’s recent work in artistic style transfer for images and videos. One technology, called StyleBank, provides an explicit representation of visual styles with a feedforward deep network that cleanly separates the content and style of an image. This framework can render stylized videos online, achieving more stable rendering results than previous approaches. In addition, the Deep Image Analogy technique takes a pair of images and transfers the visual attributes of one to the other, enabling a wide variety of artistic effects.

DeepFind: Searching within Documents to Answer Natural Language Questions

Presenter: Guihong Cao

[Video Abstract | Full Video]

Searching within web documents on mobile devices is difficult and unnatural: Ctrl+F searches only for exact matches, and it’s hard to see the search results. DeepFind takes a step toward solving this problem by allowing users to search within web documents using natural language queries, displaying snippets from the document that answer their questions.

Users can interact with DeepFind on bing.com, m.bing.com, and the Bing iOS App in two different ways: as an overlay experience, which encourages exploration and follow-up questions, or as a rich carousel of document snippets integrated directly into the search engine results pages, which proactively answers the user’s question.

Human-Robot Collaboration

Presenter: David Baumert

[Video Abstract]

This demonstration uses Softbank’s Pepper robot as testbed hardware to show a set of human-collaboration activities based on Microsoft Cognitive Services and other Microsoft Research technologies.

As both a research and a prototype-engineering effort, this project implements software technology informed by concepts such as Brooks’ subsumption architecture, distributing the robot’s brain activities across the local device (reflex functions), the local facility infrastructure (recognition functions), and remote API services hosted in the cloud (cognitive functions). The implementation is machine-independent and relevant to any robot requiring human-collaboration capabilities. This approach has supported new investigations such as non-verbal communication and body movements expressed and documented using Labanotation, making it possible for a robot to process conversations with humans and automatically generate lifelike, meaningful physical behaviors to accompany its spoken words.

InfoBots: AI-Powered Conversational QnA Systems

Presenter: Nilesh Bhide

[Video Abstract | Full Video]

As we move into the world of messaging apps, bots, and botification of content, users are starting to shift from keyword searches to bots and assistants for their information-seeking needs. Bing has built InfoBots, a set of AI- and Bing-powered QnA capabilities that bots can leverage to help users find information. InfoBots QnA capabilities are tuned for answering information-seeking questions over a wide variety of content (open-domain content from the internet, specific vertical-domain content, and so on). InfoBots supports conversational QnA through multi-turn question and answer understanding, answering questions posed in natural language. InfoBots capabilities have applications in both consumer and enterprise contexts.

InstaFact—Bringing Knowledge to Office Apps

Presenter: Silviu-Petru Cucerzan

[Video Abstract]

This demo shows how InstaFact brings the information and intelligence of the Satori knowledge graph into Microsoft’s productivity software. InstaFact can automatically complete factual information in the text a user is writing or can verify the accuracy of facts in text. It can infer the user’s needs based on data correlations and simple natural-language clues. It can expose in simple ways the data and structure Satori harvests from the Web, and let users populate their text documents and spreadsheets with up-to-date information in just a couple of clicks.

Interactive Chinese Learning App

Presenter: Yan Xia

[Video Abstract | Full Video]

When traveling to China, it’s best to know at least a bit of the language. The mobile app Learn Chinese can help travelers enjoy a better journey. Learn Chinese teaches interactively, using speech and natural language processing technology. The AI robot teacher corrects the user’s Chinese pronunciation and wording through conversations in various scenarios, such as shopping, seeing a doctor, or having dinner in a restaurant. It’s a more natural way of learning a language, propelled by AI techniques.

Machine Reading Comprehension over an Automotive Manual

Presenter: Mahmoud Adada

[Video Abstract]

Maluuba’s vision is to build literate machines. The research team has built deep learning models that can process unstructured written text and answer questions about it. The demo will showcase Maluuba’s machine reading comprehension (MRC) system by ingesting a 400-page automotive manual and answering users’ questions about it in real time. The long-term vision for this product is to apply MRC technology to all types of user manuals, such as those for cars, home appliances, and more.

Machine Teaching Using the Platform for Interactive Concept Learning (PICL)

Presenter: Alicia Edelman Pelton

[Video Abstract | Full Video]

Building machine learning (ML) models is an involved process requiring ML experts, engineers, and labelers. The demand for models for common-sense tasks far exceeds the supply of “teachers” who can build them. We approach this problem by allowing domain experts to apply what we call Machine Teaching (MT) principles. These include mining domain knowledge, concept decomposition, ideation, debugging, and semantic data exploration.

PICL is a toolkit that originated from the MT vision. It enables teachers with no ML expertise to build classifiers and extractors. The underlying SDK enables system designers and engineers to build customized experiences for their problem domain. In PICL, teachers can bring their own dataset, search or sample items to label using active learning strategies, label these items, create or edit features, monitor model performance, and review and debug errors, all in one place.
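One of the active-learning strategies such toolkits commonly use is uncertainty sampling: surfacing for labeling the items the current model is least sure about. The minimal sketch below illustrates the idea in plain Python; the scoring function and documents are hypothetical stand-ins, not PICL components.

```python
def uncertainty_sample(items, predict_proba, k=2):
    """Pick the k unlabeled items whose predicted probability is
    closest to 0.5 -- the ones the current model is least sure about."""
    return sorted(items, key=lambda x: abs(predict_proba(x) - 0.5))[:k]

# Hypothetical stand-in model: score documents by a single keyword feature.
def toy_proba(doc):
    return min(1.0, doc.count("refund") / 3)

unlabeled = [
    "please refund my order",
    "refund refund refund now",
    "great product, thanks",
    "is a refund possible? refund policy?",
]

# The teacher is shown the most ambiguous items first; confidently
# scored items (probability near 0 or 1) are deferred.
to_label = uncertainty_sample(unlabeled, toy_proba)
```

In a real teaching loop, the labels gathered this way retrain the model, and the cycle repeats until the teacher is satisfied with the error review.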

Microsoft Pix

Presenter: Kelly Freed

[Video Abstract | Full Video]

Microsoft Pix helps every photographer take better pictures. Because it incorporates AI behind the lens, it can tweak settings, select the best shots, and enhance them on the fly. It’s designed to help take the guesswork out of getting great photos, so amateur photographers enjoy the moment, instead of struggling to capture it!

Microsoft Translator live

Presenter: Chris Wendt

[Video Abstract | Full Video]

Microsoft Translator live enables users to hold translated conversations across two or more languages, with up to 100 participants at the same time, using PowerPoint, iOS, Android, Windows, and web endpoints. Businesses, retail stores, and organizations around the world need to interact with customers who don’t speak the same language as the service providers, and Microsoft Translator live addresses these needs.

Mobile Directions Robot

Presenter: Ashley Feniello

[Video Abstract | Full Video]

This demo shows our work on a mobile robot that gives directions to visitors. Currently, the robot navigates Microsoft Building 99, leading and escorting visitors, interacting with people, and generally providing a social presence in the building. The robot uses Microsoft’s Platform for Situated Intelligence and Windows components for human interaction, as well as a robot operating system running under Linux for control, localization, and navigation.

Project InnerEye – Assistive AI for Cancer Treatment

Presenter: Ivan Tarapov

[Video Abstract | Full Video]

Project InnerEye is a new AI product targeted at improving the productivity of oncologists, radiologists, and surgeons when working with radiological images. The project’s main focus is the treatment of tumors and the monitoring of cancer progression in temporal studies. InnerEye builds upon many years of research in computer vision and machine learning. It employs decision forests (as already used in Kinect and HoloLens) to help radiation oncologists and radiologists deliver better care, more efficiently and consistently, to their cancer patients.

Project Malmo – Experimentation Platform for the Next Generation of AI Research

Presenter: Katja Hofmann

[Video Abstract | Full Video]

Project Malmo is an open source AI experimentation platform that supports fundamental AI research. With the platform, Microsoft provides an experimentation environment in which promising approaches can be systematically and easily compared, and that fosters collaboration between researchers. Project Malmo is built on top of Minecraft, which is particularly appealing due to its open-ended, collaborative, and creative design. Project Malmo focuses particularly on collaborative AI: developing AI agents that can learn to collaborate with other agents, including humans, to help them achieve their goals. To foster research in this area, Microsoft recently ran the Malmo Collaborative AI Challenge, in which more than 80 teams of students worldwide competed to develop new algorithms that facilitate collaboration. This demo presents results from the challenge task, shows selected agents, and illustrates how new tasks and agents can easily be implemented.

Tutorial: Platform for Situated Intelligence

Presenter: Mihai Jalobeanu

Time: 1:15 PM–2:00 PM & 2:15 PM–3:00 PM

Location: Lassen

Engineering general-purpose interactive AI systems that are efficient, robust, transparent, and maintainable is still a challenging task. Such systems need to integrate multiple competencies, deal with large amounts of streaming data, and react quickly to an uncertain environment. They often combine human-authored components with machine-learned, non-deterministic components, which further amplifies the challenges. In this technology showcase, we demonstrate a platform under development at Microsoft Research that aims to provide a foundation for developing this class of complex, multimodal, integrative-AI systems. The framework provides a runtime that enables efficient, parallel-coordinated computation over streaming data; a set of tools for visualization, data analytics, and machine learning; and a chassis for pluggable AI components that enables the rapid development of situated interactive systems. This technology showcase provided a short introduction to and demonstration of various aspects of the framework. The session ran twice for 45 minutes, starting at 1:15 PM and 2:15 PM, to allow participants to also visit other technology showcases.
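The kind of computation such a framework targets, operators coordinated over timestamped streams, can be caricatured in a few lines of plain Python. The operators and the audio/video data below are illustrative assumptions for exposition, not part of the actual platform.

```python
from typing import Callable, Iterable, Iterator, Tuple

Message = Tuple[float, str]  # (originating timestamp, payload)

def stream_map(fn: Callable[[str], str],
               source: Iterable[Message]) -> Iterator[Message]:
    """Apply fn to each payload while preserving originating timestamps,
    so downstream components can reason about latency and ordering."""
    for t, payload in source:
        yield t, fn(payload)

def join_by_time(a, b, tolerance=0.05):
    """Pair messages from two streams whose timestamps fall within a
    tolerance: a toy version of coordinating multimodal streams."""
    b = list(b)
    for ta, pa in a:
        for tb, pb in b:
            if abs(ta - tb) <= tolerance:
                yield ta, (pa, pb)
                break

# Two hypothetical sensor streams with slightly misaligned timestamps.
audio = [(0.00, "hello"), (0.50, "world")]
video = [(0.02, "face"), (0.49, "gesture")]

# Each audio message is fused with the video message nearest in time.
fused = list(join_by_time(stream_map(str.upper, audio), video))
```

Carrying originating timestamps through every operator is what makes this style of pipeline analyzable: synchronization and latency can be measured at any point rather than inferred after the fact.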

Zo AI

Presenter: Ying Wang

[Video Abstract]

Zo is a sophisticated machine conversationalist with the personality of a 22-year-old with #friendgoals. She hangs out on Kik and Facebook and is always interested in a casual conversation with her growing crowd of human friends. Zo is an open-domain chatbot and her breadth of knowledge is vast. She can chime into a conversation with context-specific facts about things like celebrities, sports, or finance but she also has empathy, a sense of humor, and a healthy helping of sass. Using sentiment analysis, she can adapt her phrasing and responses based on positive or negative cues from her human counterparts. She can tell jokes, read your horoscope, challenge you to rhyming competitions, and much more. In addition to content, the phrasing of the conversations must sound natural, idiomatic, and human in both text and voice modalities. Zo’s “mind” is a sophisticated array of multiple machine learning (ML) techniques all working in sequence and in parallel to produce a unique, entertaining and, at times, amazingly human conversational experience. This demo shows some of Zo’s latest capabilities and how the team has achieved these technical accomplishments.

Videos

Watch the streamed sessions on demand

Keynotes

AI in the Open World

Fielding AI solutions in the open world requires systems to grapple with incompleteness and uncertainty. This session addresses several promising areas of research in open world AI, including enhancing robustness via…
See more >

Smart Enough to Work With Us? Foundations and Challenges for Teamwork-Enabled AI Systems

For much of its history, AI research has aimed toward building intelligent machines independently of their interactions with…
See more >


Fireside Chat with Harry Shum

Christopher Bishop has a fireside chat with Harry Shum, executive vice president of Microsoft’s Artificial Intelligence (AI) and Research…
See more >

The Interplay of Agent and Market Design

Humans make hundreds of routine decisions daily. More often than not, the impact of our decisions depends on the decisions of others…
See more >


AI, People, and Society

Advances in AI promise great benefit to people and organizations. However, as we push the science of AI forward, we need to consider…
See more >

Model-Based Machine Learning

Today, thousands of scientists and engineers are applying machine learning to an extraordinarily broad range of domains, and over the last…
See more >


Research in Focus

Research in Focus: Machine Reading Comprehension

Deep learning techniques are helping Microsoft researchers develop literate machines: those that can read for…

See more >

Research in Focus: Deep Learning Research and the Future of AI

AI deep learning expert and University of Montreal Professor Yoshua Bengio talks about deep learning—what it is, how…

See more >

Research in Focus: Private AI

For AI to achieve its potential, it requires access to large amounts of often sensitive data, which can pose threats to our privacy. This session, featuring Rich…

See more >

Research in Focus: Project InnerEye – Assistive AI for Cancer Treatment

In this session, Ivan Tarapov demos a prototype application of assistive AI for cancer treatment: helping a radiologist create…

See more >

Research in Focus: AI for Earth

AI for Earth is a new initiative from Microsoft designed to help the planet be more sustainable. This session…

See more >

Research in Focus: Conversational Agents

Microsoft’s Lucy Vanderwende and Bill Dolan and Ohio State’s Alan Ritter talk about deep learning algorithms that help…

See more >

Research in Focus: InfoBots

The next step for search involves InfoBots that know how to find the answer you’re looking for, not just a set of webpages that may—or may not—have those answers. In this session, Microsoft’s Nilesh Bhide and Manish…

See more >

Research in Focus: Transforming Machine Learning and Optimization through Quantum Computing

Quantum computing is in its infancy, but Microsoft’s Krysta Svore and Nathan Wiebe talk about quantum techniques as applied to…

See more >

Research in Focus: AI Experimentation Platform – Project Malmo

This session talks about Project Malmo, a unique AI experimentation platform based on the game Minecraft. Microsoft Research’s…

See more >

Onsite Breakout Sessions

AI for Accessibility: Augmenting Sensory Capabilities with Intelligent Technology

Advances in AI technologies have important ramifications for the development of accessible technologies…

See more >

Integrative-AI

Over the last decade, algorithmic developments coupled with increased computation and data resources have led to advances in well-defined verticals of AI…

See more >

Machine Reading Using Neural Machines

Teaching machines to read, process and comprehend natural language documents and images is a coveted goal in modern AI…

See more >

Learnings from Human Perception

Scientists have long explored the different sensory inputs to better understand how humans perceive the world and control their…

See more >

Conversational Systems in the Era of Deep Learning and Big Data

Recent research in recurrent neural models, combined with the availability of massive amounts of dialog data, have together…

See more >

AI for Earth

Human society is faced with an unprecedented challenge to mitigate and adapt to changing climates, ensure resilient water supplies, sustainably feed…

See more >

Private AI

As the volume of data goes up, the quality of machine learning models, predictions, and services will improve. Once models are…

See more >

Provable Algorithms for ML/AI Problems

Machine learning (ML) has demonstrated success in various domains such as web search, ads, computer vision, natural…

See more >

Social and Emotional Intelligence in AI and Agents

Social signals and emotions are fundamental to human interactions and influence memory, decision-making and wellbeing. As AI…

See more >

AI and Security

In the future, every company will be using AI, which means that every company will need a secure infrastructure that addresses AI security concerns. At the same time, the domain…

See more >

Microsoft Cognitive Toolkit (CNTK) for Deep Learning

Microsoft Cognitive Toolkit (CNTK) is a production-grade, open-source, deep-learning library. In the spirit of…

See more >

Challenges and Opportunities in Human-Machine Partnership

The new wave of excitement about AI in recent years has been based on successes in perception tasks or on domains with limited…

See more >

Transforming Machine Learning and Optimization through Quantum Computing

In 1982, Richard Feynman first proposed using a “quantum computer” to simulate physical systems with exponential speed…

See more >

Technology Showcase

Tech Showcase: Accelerating Research Using Networked FPGAs

Project Catapult connects FPGAs together through a network to create a hyperscale, reconfigurable accelerator…

See more >

Tech Showcase: Bing Visual Search

Visual search, also known as search by image, is a new way of searching for information using an image or part of an image as the query. Similar to text search, which connects…

See more >

Tech Showcase: Cortana, Your Personal Assistant

From ferry schedules to dinner reservations, Cortana is the digital assistant designed to help people get things done…

See more >

Tech Showcase: Custom Vision Service

This demo shows how Custom Vision Service can be applied to many AI vision applications. For example, if a client needs to build a custom image classifier…

See more >

Tech Showcase: Customizing Speech Recognition for Higher Accuracy Transcriptions

Two of the most important components of speech recognition systems are the acoustic model and the language model…

See more >

Tech Showcase: Deep Artistic Style Transfer: From Images to Videos

This demo demonstrates several applications of Microsoft’s recent work in artistic style transfer for images and videos. One technology, called StyleBank…

See more >

Tech Showcase: DeepFind: Searching within Documents to Answer Natural Language Questions

Searching within web documents on mobile devices is difficult and unnatural: Ctrl+F searches only for exact…

See more >

Tech Showcase: InfoBots: AI-Powered Conversational QnA Systems

As we move into the world of messaging apps, bots and botification of content, users are starting to move from keyword searches to relying on bots and assistants for their…

See more >

Tech Showcase: Interactive Chinese Learning App

When traveling to China it’s best to know at least a bit of the language. The mobile app called Learn Chinese can help travelers enjoy a better journey. Learn Chinese…

See more >

Tech Showcase: Machine Teaching Using the Platform for Interactive Concept Learning (PICL)

Building machine learning (ML) models is an involved process requiring ML experts, engineers, and labelers. The demand…

See more >

Tech Showcase: Microsoft Pix

Microsoft Pix helps every photographer take better pictures. Because it incorporates AI behind the lens, it can tweak settings…

See more >

Tech Showcase: Microsoft Translator Live

Microsoft Translator live enables users to hold translated conversations across two or more languages, with up to 100 participants…

See more >

Tech Showcase: Mobile Directions Robot

This demo shows our work on a mobile robot that gives directions to visitors. Currently, this robot is navigating Microsoft Building 99, leading people, escorting and interacting…

See more >

Tech Showcase: Project InnerEye – Assistive AI for Cancer Treatment

Project InnerEye is a new AI product targeted at improving the productivity of oncologists, radiologists, and surgeons…

See more >

Tech Showcase: Project Malmo – Experimentation Platform for the Next Generation of AI Research

Project Malmo is an open source AI experimentation platform that supports fundamental AI research. With the…

See more >

Microsoft Research blog