Interaction with large unstructured datasets is difficult because existing approaches, such as keyword search, are not always suited to describing concepts corresponding to the distinctions people want to make within datasets. One possible solution is to allow end-users to train machine learning systems to identify desired concepts, a strategy known as interactive concept learning. A fundamental challenge is to design systems that preserve end-user flexibility and control while also guiding end-users to provide examples that allow the machine learning system to effectively learn the desired concept. This paper presents our design and evaluation of four new overview-based approaches to guiding example selection. We situate our explorations within CueFlik, a system examining end-user interactive concept learning in Web image search. Our evaluation shows that our approaches not only guide end-users to select better training examples than the best-performing prior design for this application, but also reduce the impact of end-users not knowing when to stop training the system. We discuss challenges for end-user interactive concept learning systems and identify opportunities for future research on the effective design of such systems.