Many learning algorithms produce highly complex models that are difficult for humans to interpret, debug, and extend. In this paper, we address this challenge by proposing a new learning paradigm called correctable learning, in which the learning algorithm receives external feedback during training about which data examples have been learned incorrectly. We propose a simple and efficient correctable learning algorithm that learns local models for different regions of the data space. Given an incorrectly learned example, our method samples data in the neighborhood of that example and learns a new, more accurate local model over that region. We define a set of metrics that measure the correctability and performance of a learning algorithm. Our experiments on multiple regression, classification, and ranking datasets show that our correctable learning algorithm offers significant improvements over state-of-the-art techniques.
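The core idea above (a global model patched by local models fitted around examples flagged as incorrect) can be sketched as follows. This is a minimal illustration of the paradigm, not the paper's actual algorithm: the class name `CorrectableRegressor`, the fixed `radius` neighborhood, the use of ordinary least squares for both the global and local models, and the `correct` method are all assumptions made for the sake of a runnable example.

```python
import numpy as np

class CorrectableRegressor:
    """Illustrative sketch: a global linear model plus local linear models
    fitted on the neighborhoods of examples flagged as incorrect.
    (Hypothetical API; the paper's method may differ substantially.)"""

    def __init__(self, radius=1.0):
        self.radius = radius        # neighborhood size (an assumption)
        self.local_models = []      # list of (center, weights) pairs

    @staticmethod
    def _fit_linear(X, y):
        # Ordinary least squares with an appended bias column.
        Xb = np.hstack([X, np.ones((len(X), 1))])
        w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
        return w

    def fit(self, X, y):
        self.X_, self.y_ = X, y
        self.global_w_ = self._fit_linear(X, y)
        return self

    def correct(self, x_bad):
        # Feedback step: given an incorrectly learned example, gather the
        # training data in its neighborhood and fit a new local model there.
        dists = np.linalg.norm(self.X_ - x_bad, axis=1)
        mask = dists <= self.radius
        if mask.sum() >= 2:  # need enough neighbors to fit a line
            w = self._fit_linear(self.X_[mask], self.y_[mask])
            self.local_models.append((np.asarray(x_bad, dtype=float), w))

    def predict(self, X):
        Xb = np.hstack([X, np.ones((len(X), 1))])
        out = Xb @ self.global_w_
        # Local models override the global model inside their regions;
        # later corrections take precedence over earlier ones.
        for center, w in self.local_models:
            near = np.linalg.norm(X - center, axis=1) <= self.radius
            out[near] = Xb[near] @ w
        return out
```

As a usage sketch: fitting on data with a localized bump that a single global line cannot capture, then calling `correct` at a flagged point inside the bump, replaces the prediction in that region with a locally fitted model and reduces the error there.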