2nd Int’l Code Hunt Workshop on Educational Software Engineering (CHESE 2016)

Summary

Software engineering for education focuses on developing technologies that make programming, testing, and analysis more accessible to students. This workshop explored testing through gaming, which is popular with students and can produce data worthy of analysis. Code Hunt is an industrial-strength programming game that is now open to the community and available for research.

Two of the backbones of software engineering are programming and testing. Both require many hours of practice to master. To encourage students to put in these hours of practice, educators often employ the element of fun. Generally, this involves setting engaging assignments that emphasize the visual, audio, mobile and social world in which students now live. However, a common complaint in the second or third year is that “students can’t program,” which is usually interpreted as meaning they cannot readily produce code for fundamental algorithms such as reading a file or searching a list. Recruiters in industry are famous for requiring applicants to write such code on the spot. Thus there is a tension: how to maintain students’ self-motivation to practice coding skills while at the same time focusing on core algorithmic problems.

An answer is to use the challenge of a game. Games are everywhere these days, and the motivation to score, do better and complete the game is very high. We are familiar with the concept of playing against the computer, and with the sense of achievement when goals are reached or the game is won. Winning is fun, and fun is seen as a vital ingredient in accelerating learning and in retaining interest in what might otherwise be a long and sometimes boring journey toward acquiring a necessary skill.

The aim of the workshop was to act not only as a forum for the exchange of ideas, but also as a vehicle to stimulate, deepen, and widen the partnership between the software engineering and education fields internationally.

The workshop paid special attention to the open-source Code Hunt data set (players’ playing histories) released by Microsoft Research.

Agenda

Time | Session | Speaker
9:00 – 9:15 | Welcome by program chairs | Rishabh Singh and Chang Liu
9:15 – 10:30 | Computer-Aided Education | Sumit Gulwani, Microsoft Research
10:30 – 11:00 | Break
11:00 – 12:00 | Creating Code Hunt Puzzles and Contests | Nikolai Tillmann, Judith Bishop, Peli de Halleux, Nigel Horspool – Code Hunt Team
12:00 – 12:30 | Preliminary Analysis of Code Hunt Data Set from a Contest | Pierre McCauley, Brandon Nsiah-Ababio, Joshua Reed, Faramola Isiaka and Tao Xie
12:30 – 2:00 | Lunch
2:00 – 2:45 | Apex: Automatic Programming Assignment Error Explanation | Xiangyu Zhang, Purdue University
2:45 – 3:30 | Personalized Feedback Generation by Clustering and Verification | Aditya Kanade, Indian Institute of Science
3:30 – 4:00 | Break
4:00 – 4:30 | Automatic Programming Error Class Identification with Code Plagiarism-Based Clustering | Sébastien Combéfis and Arnaud Schils
4:30 – 5:30 | Panel: Code Hunt for me – successes and challenges | Judith Bishop, Alexey Kolesnichenko, Willem Visser, Rishabh Singh and Chang Liu
5:30 | Close


Organizing Committee

Steering Committee

Program Chairs

Rishabh Singh and Chang Liu

Program Committee

Call for Submissions

Focus

The workshop’s intent was to build up the part of the educational software engineering research community that studies testing through gaming. Code Hunt is the most readily available platform for this research, but papers addressing other platforms, systems, and tools were also welcome. Topics included, but were not limited to:

  • theory and practice of testing in education
  • the relationship between testing and gaming
  • approaches to providing hints
  • analysis and visualization of student data
  • challenges of sharing and re-using data
  • challenges provided by different programming languages
  • experience reports on playing Code Hunt games
  • constructing and analyzing Code Hunt games
  • challenges of white-box testing

Types of Submissions

The workshop solicits regular papers (6 pages) and position statements or demo reports (2 pages). Submissions will be peer-reviewed by at least three program committee members, and the workshop organizers will make acceptance decisions based on these reviews. Conflicts of interest will be handled carefully during the reviewing process. Submissions should be made via the EasyChair website.

Papers will appear in the ACM Digital Library. For this reason, submissions must be original and must not have been published previously or be under consideration for publication elsewhere while being evaluated for this workshop. Authors are required to adhere to the ACM Policy and Procedures on Plagiarism and the ACM Policy on Prior Publication and Simultaneous Submissions.

Papers must be prepared in the ACM conference format, and all submissions must be in English. Submissions should list the paper authors recognizably, not anonymously (i.e., FSE does not use double-blind reviewing). Submissions that do not adhere to these guidelines or that violate the formatting requirements will be declined without review.