Machine Teaching Group

Established: April 28, 2015

The Machine Teaching Group focuses on making the process of training a machine easy, fast, and universally accessible. This multi-disciplinary challenge lies at the intersection of Machine Learning, Human-Computer Interaction, Visualization, and Engineering. The Machine Teaching Group is one of the Machine Learning Groups in the Microsoft Research Redmond lab.

Group Contact: patrice@microsoft.com

Vision

Humans are good at tasks that require common-sense understanding, but they perform those tasks slowly and at high cost. Computers, in contrast, have these abilities reversed: they typically perform poorly at common-sense understanding tasks, but they can process each task at high speed and low cost. Machine Learning (ML) holds the promise of getting the best of both worlds: we can train computer models that approximate human performance on common-sense tasks while maintaining high speed and low cost.

The demand for ML models far exceeds the supply of “machine teachers” who can build those models. Categories of common-sense understanding tasks that we would like to automate with computers include commands, feedback, requests, alarms, and personalized interests. For each such category, we can imagine hundreds or thousands of modeling tasks that are domain- and/or application-specific. For example, we might want to build a model that understands voice commands for controlling a television. The solution to this high demand is to make teaching machines easy, fast, and universally accessible. We refer to this approach as Machine Teaching.

Surprisingly, Machine Teaching is neglected by ML practitioners in both the academic and industrial communities. The largest fraction of the ML community is focused on creating new algorithms to improve the accuracy of the “learners” on given data sets. That ML challenge is well formulated, and there are plenty of data sets and competitions (e.g., ImageNet and deep learning). Machine teaching, in contrast to machine learning, is focused on the efficacy of the teachers, given the learners. The learning algorithm is an interchangeable commodity – a compiler – which takes a training set as input and produces a model. The metrics of machine teaching measure performance relative to human costs, such as productivity, interpretability, accessibility, maintainability, scaling with the number of contributions, robustness to human errors, and so on. Machine Teaching is a paradigm shift from machine learning, akin to how the Systems and programming languages fields have shifted from optimizing for performance to optimizing for productivity. For instance, concepts such as modularity, correctness proofs, functional programming, layered protocols, and immutability all optimize productivity over performance. If machine learning is a form of compilation (creating a model from a training set), Machine Teaching is the ML equivalent of Systems (building the ML platforms for interactive experimentation, deployment, and federation of large collections of models) and programming languages (human-machine interfaces for data exploration, labeling, featuring, and debugging).
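
To make the “learner as a compiler” analogy concrete, the sketch below shows a teaching workflow in which the learning algorithm is swapped without changing the teacher's work. It assumes scikit-learn-style learners; the name compile_model and the toy data are hypothetical and purely illustrative.

    # Minimal sketch: the learning algorithm as an interchangeable "compiler".
    # Assumes scikit-learn is installed; compile_model and the toy data are
    # hypothetical, chosen only to illustrate the workflow.
    from sklearn.linear_model import LogisticRegression
    from sklearn.tree import DecisionTreeClassifier

    def compile_model(learner, examples, labels):
        """Turn a training set (the teacher's artifact) into a model."""
        learner.fit(examples, labels)
        return learner

    # The teacher's work product: a small labeled training set.
    X = [[0.0, 0.1], [0.2, 0.0], [0.1, 0.3],
         [0.9, 1.0], [1.0, 0.8], [0.8, 0.9]]
    y = [0, 0, 0, 1, 1, 1]

    # Any learner exposing fit/predict can be dropped in; the teaching
    # workflow above it does not change.
    for learner in (LogisticRegression(), DecisionTreeClassifier()):
        model = compile_model(learner, X, y)
        print(type(learner).__name__, model.predict([[0.95, 0.95]]))

Under this framing, improving the teacher's tools (exploration, labeling, featuring, debugging) pays off regardless of which learner is plugged in underneath.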

A multi-disciplinary challenge: The discipline of Machine Teaching lives at the intersection of the Visualization, HCI, ML, Systems, and Engineering fields. Visualization and HCI expertise are critical because the teaching metrics measure human performance. ML expertise is required to meet generalization targets under the strict constraints imposed by the teacher metrics. Systems and Engineering expertise are equally important because scaling with the number of contributions is an explicit goal fulfilling a business need. Machine teaching faces many difficult research challenges that currently do not have satisfying solutions: featuring (HCI, Visualization, ML, Systems), exploration (HCI, Visualization, ML, Systems), partial and approximate labeling (HCI, Visualization), model inter-dependency tracking (HCI, Systems), scaling with adversarial models (ML, Systems), transferability (HCI, ML), debuggability (HCI, Visualization, ML), maintainability (HCI, Visualization, ML, Systems), etc.

Whereas progress in the ML discipline can yield single-digit improvements to our ML capabilities, we believe there are orders-of-magnitude gains to be had by democratizing machine teaching and combining the expertise of Visualization, HCI, ML, Systems, and Engineering.

Visiting Researchers

Jerry Zhu (University of Wisconsin-Madison)

Alumni

  • Aparna Lakshmiratan
  • Denis Charles
  • David Grangier
  • Leon Bottou

Projects

Platform for Interactive Concept Learning (PICL)

Established: April 28, 2015

Quick interaction between a human teacher and a learning machine presents numerous benefits and challenges when working with web-scale data. The human teacher guides the machine towards accomplishing the task of interest, while the system leverages big data to find the examples that maximize the training value of its interaction with the teacher. Building classifiers and entity extractors is currently an inefficient process involving machine learning experts, developers, and labelers. PICL enables teachers with…
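
Selecting “examples that maximize the training value” is closely related to active learning. The sketch below is a generic uncertainty-sampling loop, not PICL's actual selection strategy; teacher_label, true_w, and the synthetic pool are stand-ins for the human teacher and the web-scale data.

    # Illustrative uncertainty-sampling loop (not PICL's actual algorithm):
    # the system proposes the example the current model is least certain
    # about, the "teacher" labels it, and the model is retrained.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    pool = rng.normal(size=(1000, 5))              # unlabeled pool of examples
    true_w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])  # hidden concept (toy ground truth)

    def teacher_label(x):
        """Stand-in for the human teacher's judgment."""
        return int(x @ true_w > 0)

    # Seed the training set with one example of each class.
    pos = next(i for i in range(len(pool)) if teacher_label(pool[i]) == 1)
    neg = next(i for i in range(len(pool)) if teacher_label(pool[i]) == 0)
    labeled = {pos, neg}
    X = [pool[pos], pool[neg]]
    y = [1, 0]

    model = LogisticRegression()
    for _ in range(20):
        model.fit(X, y)
        probs = model.predict_proba(pool)[:, 1]
        # Ask the teacher about the most uncertain unlabeled example.
        for i in np.argsort(np.abs(probs - 0.5)):
            if int(i) not in labeled:
                break
        labeled.add(int(i))
        X.append(pool[i])
        y.append(teacher_label(pool[i]))

    print("labels collected:", len(y))

Uncertainty sampling is only one of many selection strategies; the broader point is that every query costs teacher time, so the selection criterion is itself a teaching-productivity decision.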

Visualization for Machine Teaching

Established: June 17, 2008

A large body of human-computer interaction research has focused on developing metaphors and tools that allow people to effectively issue commands and directly manipulate informational objects. However, with the advancement of computational techniques such as machine learning, we now have the unprecedented ability to embed 'smarts' that allow machines to assist and empower people in completing their tasks. We believe that there exists a computational design methodology which allows us to gracefully combine automated services with…