May 28, 2020, 14:00–15:30
Assessing coding skills and learning progress in a programming course is almost impossible without code submissions. Other assessment types, such as multiple-choice quizzes, can be checked automatically but do not cover the full range of skills. Writing unit tests for the automated assessment of programming exercises, in turn, is a manual process that is prone to missing individual edge cases. Hence, it usually pays off not for a small group of learners but only in e-learning scenarios with many participants.
During this workshop, we will introduce participants to the MOOC-proven auto-grader we recently used in a local lecture series, including the final exam. We will share experiences gained throughout the Master course and outline the differences to MOOCs concerning the introduction of the system, the test cases, and the feedback messages. In a team exercise, each group will focus on a different aspect of automated code assessment, such as the results of the code execution, error handling, coding style, or performance. Each team will reflect on the requirements of learners and instructors, with a particular focus on inspecting code quality.
After the presentation of results, a guided discussion will compare the different approaches and their respective advantages and disadvantages. Workshop participants will gain an overview of available tools, e.g. unit tests, syntax tree parsing, and static program analysis, and will receive guidelines for creating their own test cases for programming exercises.
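To give a flavor of one of the tools mentioned above, the following is a minimal sketch of how syntax tree parsing can complement unit tests in automated assessment. It is illustrative only and not taken from the auto-grader discussed in the workshop; the exercise name `mean` and the style rule are invented for this example, using Python's standard `ast` module.

```python
import ast

# Hypothetical student submission (illustrative, not from the workshop system).
submission = """
def mean(values):
    return sum(values) / len(values)
"""

tree = ast.parse(submission)

# Structural check: the (invented) exercise requires a function named 'mean'.
defines_mean = any(
    isinstance(node, ast.FunctionDef) and node.name == "mean"
    for node in ast.walk(tree)
)

# Style check: an assumed rule forbidding explicit loops,
# so learners practice the built-ins sum() and len().
uses_loop = any(
    isinstance(node, (ast.For, ast.While)) for node in ast.walk(tree)
)

print(defines_mean, uses_loop)
```

Unlike a unit test, such a check never executes the submission, so it can grade structure and style even when the code crashes at runtime.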