Abstract

Learning to code has become so popular that beginners now typically encounter coding on a website, alongside thousands of others at the same time. Providing feedback and recognition at such scale is a challenge that can be met by automated test generation. Through automation and access to massive amounts of data, we show that the frequency, coverage, accuracy, and personalization of feedback can be improved over earlier systems. Recognition can also be made automatic by using a gaming model. Building on the Code Hunt programming game, we have developed and tested a system of test-driven synthesis (TDS), and our results show that it accurately produces sensible feedback. Moreover, the feedback increases engagement in continuing with the difficult task of learning to code. We also report on the effect of recognition of progress during the game and during contests.