Introduction to Learning to Teach
AutoML, which aims to automate the construction of machine learning systems, has attracted a lot of attention in the research community and generated considerable excitement in industry and the media. Different approaches have been proposed for AutoML, such as meta learning and learning to learn. Although they have achieved certain success, when viewed from the perspective of the human cognitive process, current ML and AutoML research ignores an important factor: teaching. Different from previous works, we propose a new framework called “learning to teach” (L2T) that tries to achieve more comprehensive, automatic, and realistic teaching for ML systems. An ML system typically learns from a given dataset D, by optimizing a certain loss L, within a particular hypothesis (function) space F. L2T covers a wide spectrum of important problems largely overlooked by the community:
1. Data teaching, which aims to find the best training data D for the task at hand. Data plays a role similar to teaching materials such as textbooks in human teaching.
2. Loss function teaching, which aims to design the most appropriate loss function L to be optimized by the student. As an analogy, the loss corresponds to the examination criteria for the student in human teaching.
3. Hypothesis space teaching, which aims to identify the hypothesis space F that the student model should be drawn from. This also has a good analogy in human teaching: to solve a mathematical problem, middle school students are taught only basic algebraic skills, whereas undergraduate students are taught calculus.
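To make the data-teaching idea above concrete, here is a minimal sketch of a teacher–student loop. All names (`teacher_select`, `student_step`) and the selection policy (rank examples by the student's current loss and keep the hardest ones) are illustrative assumptions, not the specific policies learned in the papers below:

```python
import numpy as np

def teacher_select(losses, budget):
    """Hypothetical data-teaching step: rank training examples by the
    student's current per-example loss and keep the `budget` hardest
    ones (one of many possible teaching policies)."""
    order = np.argsort(losses)[::-1]  # hardest examples first
    return order[:budget]

def student_step(w, X, y, lr=0.1):
    """One gradient step of a linear least-squares student on the
    data the teacher selected."""
    grad = X.T @ (X @ w - y) / len(y)
    return w - lr * grad

# Toy loop on synthetic noiseless data.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w
w = np.zeros(3)
for _ in range(200):
    losses = (X @ w - y) ** 2            # per-example loss
    idx = teacher_select(losses, budget=32)
    w = student_step(w, X[idx], y[idx])  # student sees only D chosen by teacher
```

In the full L2T framework the teacher is itself a learned model (trained, e.g., by reinforcement learning from the student's progress) rather than this fixed hardest-first heuristic; the sketch only illustrates where the teacher intervenes in the student's training loop.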
Lijun Wu, Fei Tian, Yingce Xia, Tao Qin, Jianhuang Lai, and Tie-Yan Liu, Learning to Teach with Dynamic Loss Functions, NIPS 2018.
Renqian Luo, Fei Tian, Tao Qin, Enhong Chen, and Tie-Yan Liu, Neural Architecture Optimization, NIPS 2018. [code]
Yang Fan, Fei Tian, Tao Qin, Xiangyang Li, and Tie-Yan Liu, Learning to Teach, ICLR 2018. [Chinese article]