The demand for machine learning (ML) models far exceeds the supply of “machine teachers” who can build them. Categories of common-sense understanding tasks that we would like to automate with computers include interpreting commands, feedback, requests, alarms, and personalized interests. Each such category contains hundreds or thousands of modeling tasks; for example, we might want to build a model that understands voice commands for controlling a television. One solution to this growing demand is to make teaching machines easy, fast, and universally accessible.
A large segment of the ML community is focused on creating new algorithms to improve the accuracy of “learners” (ML algorithms) on given data sets. The machine teaching (MT) discipline, in contrast, focuses on the efficacy of the teachers, given the learners. The metrics of machine teaching measure performance relative to human costs, such as productivity, interpretability, scaling (with problem complexity or number of contributors), and robustness.
Machine teaching is a paradigm shift from machine learning, akin to how other fields, such as programming languages, have shifted from optimizing performance to optimizing productivity. Traditional ML has paid little attention to the sources of teacher productivity that MT targets. For instance, teachers often do not fully understand their problems at the outset. Concept definitions, schemas, and labels change as new islands of rare positives are discovered or when teachers simply change their minds. Evolving decomposition, leading to new features and the training of submodels, is an inherent part of the teaching process.
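The evolving decomposition described above can be sketched as follows. This is a minimal illustration, not an implementation from any MT toolkit: the concept names, keyword features, and the television-command scenario are hypothetical, chosen only to show how discovering rare positives leads a teacher to split a concept into sub-concepts whose outputs become features of the parent concept.

```python
def contains_any(text, keywords):
    """Feature: does the text mention any of the given keywords?"""
    t = text.lower()
    return any(k in t for k in keywords)

# Step 1: the teacher starts with a single monolithic concept,
# "TV command", expressed as one hand-built keyword feature.
def is_tv_command_v1(text):
    return contains_any(text, ["channel", "volume"])

# Step 2: teaching surfaces an island of rare positives ("mute",
# "turn on") that the first feature misses. The teacher decomposes
# the concept into sub-concepts; each sub-concept is a submodel
# whose output becomes a feature of the parent concept.
SUB_CONCEPTS = {
    "channel_control": ["channel", "switch to"],
    "volume_control":  ["volume", "louder", "quieter", "mute"],
    "power_control":   ["turn on", "turn off"],
}

def is_tv_command_v2(text):
    features = {name: contains_any(text, kws)
                for name, kws in SUB_CONCEPTS.items()}
    # A trivial parent model: any active sub-concept fires the concept.
    return any(features.values())

print(is_tv_command_v1("please mute the tv"))  # False: rare positive missed
print(is_tv_command_v2("please mute the tv"))  # True after decomposition
```

The point of the sketch is the workflow, not the classifier: the schema changed mid-teaching (one concept became three sub-concepts), and the decomposition, not a new learning algorithm, is what recovered the missed positives.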
Because the discipline of machine teaching lives at the intersection of the human-computer interaction (HCI), ML, visualization, systems, and engineering fields, it requires collaboration and cross-discipline thinking. The goals of machine teaching research, languages, and tools include improving interpretability, sharing, interchangeability of ML algorithms, and interchangeability of teachers.