Abstract

End-user interactive concept learning is a technique for interacting with large unstructured datasets, requiring insights from both human-computer interaction and machine learning. This note re-examines an assumption implicit in prior interactive machine learning research: that interaction should focus on the question "what class is this object?". We broaden interaction to include examination of multiple potential models while training a machine learning system. We evaluate this approach and find that people naturally adopt model revision during the interactive machine learning process, and that doing so improves the quality of their resulting models for difficult concepts.