Speakers
Sebastian Serth
Hasso Plattner Institute, University of Potsdam, Germany
Christiane Hagedorn
Hasso Plattner Institute, University of Potsdam, Germany
Start
23/05/2022 - 14:00
End
23/05/2022 - 15:30
How to create programming exercises with automated feedback
Monday 23/05 14:00-15:30h
Outdoor Area B
Abstract
Assessing coding skills and learning progress in a programming course is almost impossible without code submissions. Other assessment types, such as multiple-choice quizzes, can be checked automatically but do not cover the full range of skills. Writing unit tests for the automated assessment of programming exercises is a manual process; hence, it is rarely worthwhile for a small group of learners and is mainly used in larger e-learning scenarios. During this workshop, we will share insights from an auto-grader designed for MOOCs that we also deployed for lectures at our university. In a team exercise, each group will then focus on a different aspect of automated code assessment, such as the results of the code execution, error handling, coding style, or performance. Each team will reflect on learners’ and instructors’ requirements and draft a suitable assistance feature for the identified requirements. After the presentations, a guided discussion will compare the different approaches and their advantages and disadvantages. Workshop participants will gain an overview of available tools and techniques, e.g., unit tests, syntax tree parsing, or static program analysis, and receive guidelines for creating their own test cases.
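To give a concrete impression of the unit-test-based feedback mentioned above, the following minimal sketch shows how an auto-grader could test a hypothetical student file submission.py that is expected to define a function median(values). The file name, function name, and messages are illustrative assumptions, not taken from our system; the assertion messages double as feedback shown to learners.

```python
# test_median.py -- illustrative auto-grader tests for a hypothetical student
# file `submission.py` that is expected to define median(values).
import unittest

from submission import median  # assumed student code


class MedianTests(unittest.TestCase):
    def test_odd_number_of_values(self):
        self.assertEqual(median([3, 1, 2]), 2,
                         "The median of an odd-length list is its middle element.")

    def test_even_number_of_values(self):
        self.assertEqual(median([4, 1, 3, 2]), 2.5,
                         "For an even-length list, average the two middle elements.")

    def test_empty_list(self):
        # Edge case: the instructor decides whether an explicit error is required.
        with self.assertRaises(ValueError):
            median([])


if __name__ == "__main__":
    unittest.main()
```

An auto-grader typically runs such a test suite in a sandbox and maps passed and failed assertions to a score and to textual feedback.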
Needs Analysis
The workshop results and the participants’ feedback will be used to create a reusable testing framework for assessing practical coding exercises in online courses and university lectures. The goal is to reduce the manual effort of creating exercises and to provide a guideline for reference. We will discuss strategies for automated grading and a suitable setting for using automated systems in teaching scenarios. Throughout the workshop, we will discuss how instructors can include static program analysis and how TEL researchers can benefit from these automated systems. Additionally, we will provide insights into the metrics we usually analyze for new assistance features tested in our auto-grader. Future discussions might evolve around the need for specific test routines and around support that helps teachers create tests more quickly for many exercises.
Learning Objectives
This workshop has the following learning objectives: (1) Participants will learn more about the different types of automated feedback, including the corresponding usage scenarios. (2) They will increase their understanding of the advantages and limitations of auto-graders for programming courses. (3) Further, we aim to provide hands-on experience in creating robust and supportive code assessments with automated feedback, and (4) we will outline the next steps for introducing automated tests at participants’ local universities.
Pre-activities
Sharing a link to the auto-grader and a short example exercise beforehand allows participants to look into the general system, try auto-grading themselves (in a popular programming language), and build a common understanding for the workshop. Based on the experiences participants bring along, we could start with a discussion about the advantages and disadvantages of the exercises they encountered and highlight the difficulties instructors might face when creating appropriate tests for exercises. Thus, it would be ideal to provide a link and a short example beforehand, but we can also accommodate participants who join spontaneously and have not looked into the prepared material. General programming skills are preferred; prior experience with code testing is beneficial but not required.
Session Description
- Introduction (15min):
- What does programming education in computer science look like, and what is the problem with manually reviewing exercises?
- What has been achieved (with our auto-grader) so far?
- What are the differences between the MOOC context and traditional classes concerning help requests and submissions?
- Form thematic groups (5 min) based on prepared index cards, e.g.:
- What could be assessed with unit tests?
- Which applications are suitable for static program analysis? (A linter-based sketch follows this list.)
- How could a syntax tree be used for assessment? (A syntax-tree sketch follows this list.)
- What can only be achieved with a manual review, and how could the effort be minimized for instructors?
- In which regards are two submissions comparable, and how can the “better” one be picked?
- How could instructors be supported in test case creation for their exercises?
- Identify and understand the task with given examples (5 min). Assign roles to team members so that each focuses on either the instructor or the student role and their needs.
- Work on the team exercise (30 min): What can be expected from the automated assessment in the given role (instructor/student)? What is helpful, and what is not? How should edge cases be dealt with?
- Present team results (15 min): What do you expect to work well, and which problems did you encounter?
- Discuss the results (advantages and disadvantages) and rank the presented methods based on their cost-benefit ratio (12 min)
- Closing remarks and feedback (8 min): What did you learn, and how do you feel about automated assessment? Which questions are still open?
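To make the static-analysis card more tangible, here is a minimal sketch of how findings from an off-the-shelf linter could be turned into feedback messages. It assumes the flake8 linter is installed and that the student file is named submission.py; both are illustrative assumptions, not a description of our auto-grader.

```python
# lint_check.py -- sketch: run an off-the-shelf linter (flake8, assumed to be
# installed) on a student file and collect its findings as feedback messages.
import subprocess


def lint_feedback(path: str = "submission.py") -> list[str]:
    """Run flake8 on the given file and return one message per finding."""
    result = subprocess.run(
        ["flake8", path],
        capture_output=True,
        text=True,
    )
    # flake8 prints one finding per line; empty output means no issues found.
    return [line for line in result.stdout.splitlines() if line.strip()]


if __name__ == "__main__":
    for message in lint_feedback():
        print(message)
```

In practice, one would filter or rephrase the linter findings so that only those relevant to the exercise are shown to learners.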
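Similarly, the syntax-tree card can be illustrated with a small check based on Python’s built-in ast module. The sketch below assumes a hypothetical exercise that requires a function my_sum(values) and forbids calling the built-in sum(); the function name and the constraint are made up for illustration.

```python
# ast_check.py -- sketch of a syntax-tree check, assuming a hypothetical
# exercise that asks students to implement the sum themselves.
import ast

FORBIDDEN_CALLS = {"sum"}  # hypothetical exercise constraint


def check_submission(source_code: str) -> list[str]:
    """Return a list of feedback messages derived from the syntax tree."""
    feedback = []
    tree = ast.parse(source_code)

    # Check that the required function is defined at all.
    defined_functions = {node.name for node in ast.walk(tree)
                         if isinstance(node, ast.FunctionDef)}
    if "my_sum" not in defined_functions:
        feedback.append("Please define a function called my_sum(values).")

    # Flag calls to forbidden built-ins.
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in FORBIDDEN_CALLS):
            feedback.append(f"Line {node.lineno}: implement the logic yourself "
                            f"instead of calling {node.func.id}().")
    return feedback


if __name__ == "__main__":
    student_code = "def my_sum(values):\n    return sum(values)\n"
    print(check_submission(student_code))
```

Such checks complement unit tests: the tests verify observable behavior, while syntax-tree checks inspect how a solution was written.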
Post-activities
Participants interested in the topic may continue with two different kinds of resources: a variety of scientific publications describe the usage of auto-graders and their potential effects on learners, while more practice-oriented guides provide concrete steps to implement tests and to try out the systems under discussion. As part of the workshop (or afterward), we can provide starting points and suitable reading material for both categories.