Microsoft Research Blog

The Microsoft Research blog provides in-depth views and perspectives from our researchers, scientists and engineers, plus information about noteworthy events and conferences, scholarships, and fellowships designed for academic and scientific communities.

Creating AI glass boxes – Open sourcing a library to enable intelligibility in machine learning

May 10, 2019 | By Rich Caruana, Senior Principal Researcher; Harsha Nori, Data Scientist; Samuel Jenkins, Data Scientist; Paul Koch, Principal Research Software Engineer; Ester de Nicolas, Product Marketing Manager

When AI systems impact people’s lives, it is critically important that people understand their behavior. By understanding their behavior, data scientists can properly debug their models. If they can reason about how models behave, designers can convey that information to end users. If doctors, judges, and other decision makers trust the models that underpin intelligent systems, they can make better decisions. More broadly, with a fuller understanding of models, end users might more readily accept the products and solutions powered by AI, while growing regulatory demands might be more easily satisfied.

In practice, achieving intelligibility can be complex and highly dependent on a host of variables and human factors, precluding anything resembling a “one-size-fits-all” approach. Intelligibility is an area of cutting-edge, interdisciplinary research, building on ideas from machine learning, psychology, human-computer interaction, and design.

Researchers at Microsoft have been working on how to create intelligible AI for years, and we are extremely excited to announce today that we are open sourcing, under the MIT license, a software toolkit – InterpretML – that will enable developers to experiment with a variety of methods for explaining models and systems. InterpretML implements a number of intelligible models, including the Explainable Boosting Machine (an improvement over generalized additive models), as well as several methods for generating explanations of the behavior of black-box models or their individual predictions.
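To give a flavor of what this looks like in practice, here is a minimal sketch of training an Explainable Boosting Machine with InterpretML and visualizing its explanations. The class and function names follow the interpret package on GitHub; the dataset is an illustrative stand-in, so treat the data-handling details as assumptions.

```python
import pandas as pd
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show

# Illustrative dataset: any tabular binary-classification data would do here.
data = load_breast_cancer()
X = pd.DataFrame(data.data, columns=data.feature_names)
y = data.target
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The Explainable Boosting Machine is itself intelligible: it is an additive
# model whose learned per-feature contributions can be inspected directly.
ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)

# Global explanation: how each feature shapes the model's predictions overall.
show(ebm.explain_global())

# Local explanations: why the model scored these individual rows as it did.
show(ebm.explain_local(X_test[:5], y_test[:5]))
```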

With easy access to many intelligibility methods, developers can compare and contrast the explanations produced by different methods and select those that best suit their needs. Such comparisons can also help data scientists understand how much to trust the explanations by checking for consistency between methods, as in the sketch below.
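For comparison, here is a sketch of explaining a black-box model with one of the wrapped black-box explainers, LIME, continuing from the snippet above. The LimeTabular constructor signature has varied across interpret releases (and it depends on the third-party lime package), so verify the arguments shown here against your installed version.

```python
from sklearn.ensemble import RandomForestClassifier
from interpret.blackbox import LimeTabular  # requires the lime package
from interpret import show

# A black-box model: often accurate, but not intelligible on its own.
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X_train, y_train)

# LIME fits simple local surrogates around individual predictions; comparing
# these explanations with the EBM's is one way to check consistency between methods.
lime = LimeTabular(predict_fn=rf.predict_proba, data=X_train)
show(lime.explain_local(X_test[:5], y_test[:5]))
```

Viewing the two dashboards side by side makes it easy to see where the methods agree on which features matter and where they diverge.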

We look forward to engaging with the open-source community as we continue to develop InterpretML. If you are interested, we warmly invite you to join us on GitHub.
