Prototypical Fine-tuning: Towards Robust Performance Under Varying Data Sizes
- Yiqiao Jin
- Xiting Wang
- Yaru Hao
- Yizhou Sun
- Xing Xie
AAAI Conference on Artificial Intelligence (AAAI)
In this paper, we move towards combining large parametric models with non-parametric prototypical networks. We propose prototypical fine-tuning, a novel prototypical framework for fine-tuning pretrained language models (LMs) that automatically learns a bias to improve predictive performance across varying data sizes, especially in low-resource settings. Our prototypical fine-tuning approach can automatically adjust the model capacity according to the number of data points. Moreover, we propose four principles for effective prototype fine-tuning towards the optimal solution. Experimental results across various datasets show that our method achieves significant performance improvements under various low-resource settings, as well as comparable and usually better performance in high-resource scenarios.
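For readers unfamiliar with prototypical networks, the sketch below illustrates the generic idea of classifying LM embeddings by their distance to class prototypes. It is a minimal, hypothetical example, not the paper's actual implementation: the function names, shapes, and the stand-in random embeddings are all illustrative assumptions.

```python
# Illustrative sketch: prototypical classification on top of LM embeddings.
# Any frozen or fine-tuned LM that maps inputs to fixed-size vectors
# (e.g., [CLS] representations) could produce the embeddings used here.
import torch
import torch.nn.functional as F


def class_prototypes(embeddings: torch.Tensor, labels: torch.Tensor, num_classes: int) -> torch.Tensor:
    """Average the embeddings of each class to obtain one prototype per class."""
    prototypes = torch.zeros(num_classes, embeddings.size(-1), device=embeddings.device)
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            prototypes[c] = embeddings[mask].mean(dim=0)
    return prototypes


def prototypical_logits(query: torch.Tensor, prototypes: torch.Tensor) -> torch.Tensor:
    """Score each query by its negative squared Euclidean distance to every prototype."""
    return -torch.cdist(query, prototypes).pow(2)


# Toy usage with random vectors standing in for encoder outputs.
torch.manual_seed(0)
support_emb = torch.randn(32, 768)           # embeddings of labeled examples
support_labels = torch.randint(0, 4, (32,))  # their class labels
query_emb = torch.randn(8, 768)              # embeddings of new examples
query_labels = torch.randint(0, 4, (8,))

protos = class_prototypes(support_emb, support_labels, num_classes=4)
logits = prototypical_logits(query_emb, protos)
pred = logits.argmax(dim=-1)                  # nearest-prototype prediction
loss = F.cross_entropy(logits, query_labels)  # standard training objective
```

Because the prototypes are non-parametric (they are recomputed from the available labeled data), the effective capacity of such a classifier naturally scales with the number of data points, which is the intuition the abstract appeals to.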