Optimally designing the location of training input points (active learning) and choosing the best model (model selection) are two important components of supervised learning, and both have been studied extensively. However, these two issues have typically been investigated separately, as independent problems. If training input points and the model are optimized simultaneously, generalization performance could be improved further. We call this problem active learning with model selection. In this talk, I introduce a new approach called ensemble active learning. The proposed approach compares favorably with alternative methods such as sequentially alternating between active learning and model selection.
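To illustrate the ensemble idea, here is a minimal sketch (not the talk's actual method): rather than committing to a single model before designing training inputs, each candidate input point is scored by its utility averaged over a set of candidate models, weighted by how plausible each model currently is. The utility function, the candidate models (here, weight-decay values), and the uniform model beliefs are all hypothetical placeholders.

```python
# Hypothetical sketch of ensemble active learning: average each candidate
# input point's utility over candidate models instead of picking one model
# first and designing inputs for it afterwards.

# Toy utility: the "value" of sampling at x under a model parameterized by
# a weight-decay constant. Purely illustrative, not the real criterion.
def utility(x, weight_decay):
    return x * x / (1.0 + weight_decay)

candidate_inputs = [i / 10 for i in range(11)]   # pool of candidate input points
candidate_models = [0.1, 1.0, 10.0]              # hypothetical weight-decay values
model_beliefs = [1 / 3, 1 / 3, 1 / 3]            # uniform model plausibility

def ensemble_score(x):
    # Belief-weighted average utility over all candidate models.
    return sum(b * utility(x, m)
               for b, m in zip(model_beliefs, candidate_models))

# Select the input point that is good on average across models.
best_x = max(candidate_inputs, key=ensemble_score)
print(best_x)  # → 1.0
```

In a sequential alternative, one would first select a single model and then score inputs with `utility(x, that_model)` alone; the ensemble criterion hedges against committing to the wrong model before enough data has been collected.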