Abstract

In computer-aided education, the goal of automatic feedback is to provide a meaningful explanation of students’ mistakes. We focus on providing feedback for the task of constructing a deterministic finite automaton (DFA) that accepts strings matching a described pattern. Natural choices for feedback are binary feedback (correct/wrong) and a counterexample, i.e., a string that the student’s automaton processes incorrectly. Such feedback is easy to compute but might not give the student enough help. Our first contribution is a novel way to automatically compute alternative conceptual hints. Our second contribution is a rigorous evaluation of feedback with 377 students. We find that providing either counterexamples or hints is judged as helpful, increases student perseverance, and can improve problem-completion time. However, each strategy has particular strengths and weaknesses. Since our feedback is completely automatic, it can be deployed at scale and integrated into existing MOOCs.