Large language models for feedback generation


Speakers

Sebastian Gombert
DIPF, Germany
Daniele Di Mitri
DIPF, Germany
Lukas Menzel
Goethe University Frankfurt, Germany

Start

09/06/2023 - 10:30

End

09/06/2023 - 12:00

Large language models for feedback generation

Friday 09/06 10:30-12:00h
Workshop Space A
Abstract

This workshop provides an overview of recent advances in AI methodology for content scoring and feedback generation, specifically natural language processing and transformer language models such as BERT or GPT. These models have revolutionized the field by allowing end-to-end modeling of linguistic phenomena, resulting in improved text classification and generation capabilities. The workshop highlights educational use cases of these models, such as the automated coding of assessments and the generation of feedback, either from templates or from scratch. It aims to explore these possibilities and related areas, including textual entailment classification, content scoring, and the ethical implications of these technological developments. The discussion will cover current research, technologies, and open questions in these areas.
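The template-based variant of feedback generation mentioned above can be illustrated with a minimal, dependency-free sketch. The score thresholds, bands, and message texts below are invented for illustration and are not part of any system presented in the workshop.

```python
# Minimal sketch of template-based feedback: a content score (here assumed
# to come from an upstream scoring model) is mapped to a feedback band,
# and a template for that band is filled in. All thresholds and messages
# are illustrative assumptions.

TEMPLATES = {
    "full": "Well done! Your answer covers the key concept: {concept}.",
    "partial": "You are on the right track, but revisit {concept}.",
    "none": "Your answer does not yet address {concept}. Re-read the material on it.",
}

def band_from_score(score: float) -> str:
    """Map a numeric content score in [0, 1] to a feedback band."""
    if score >= 0.8:
        return "full"
    if score >= 0.4:
        return "partial"
    return "none"

def feedback(score: float, concept: str) -> str:
    """Render templated feedback for a scored response."""
    return TEMPLATES[band_from_score(score)].format(concept=concept)

print(feedback(0.9, "photosynthesis"))
print(feedback(0.5, "photosynthesis"))
```

Generating feedback "from scratch" replaces the fixed templates with text produced by a generative model, while the surrounding scoring step stays the same.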

 

Needs Analysis

In the last decade, the methodology of AI, and natural language processing in particular, has made rapid advances. Ten years ago, most researchers still relied on feature-based modeling using statistical models and hand-crafted rules, but since then, neural networks, and especially transformer language models such as GPT, have taken the field by storm and enabled rapid progress. This also opens up new educational use cases. Previously, providing learners with highly informative feedback was often impossible due to limiting factors such as teacher capacity. With the help of natural language processing, we can automate the scoring of open-ended responses and provide feedback. Early research has even explored the use of generative transformers to generate feedback from scratch. As these approaches can be applied in many different fields, it is important to provide interested TEL researchers with an introduction.

 

Learning Objectives

In this workshop, we will explore these possibilities. We will look at current research, technologies, and open questions. This involves the classification of textual entailment as a prerequisite for content scoring, an overview of state-of-the-art methods for this purpose, and an overview of methods for generating and augmenting feedback. Moreover, we will have a look at research prototypes from our lab. After presenting this content, we will conclude the workshop with an open discussion round, in which we will address technical questions as well as the ethical aspects and implications of these technological developments. To summarize:

  • The participants will get an overview of state-of-the-art NLP technologies and approaches for scoring responses and giving task-level feedback, as well as the theoretical basics of the field.
  • The participants will interact with practical prototypes using the discussed technologies and reflect on them.
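As a concrete, dependency-free illustration of what "scoring responses" means in practice, the toy function below scores a learner answer against a reference answer by lexical overlap. This heuristic is a deliberate stand-in: the transformer-based scorers discussed in the workshop replace it with learned models, but the input/output shape (answer in, score in [0, 1] out) is the same. All names and example texts here are hypothetical.

```python
# Toy content scorer: lexical overlap between a learner answer and a
# reference answer. A stand-in for the transformer-based scorers covered
# in the workshop; the function signature, not the heuristic, is the point.

def tokenize(text: str) -> set[str]:
    """Lowercase, strip surrounding punctuation, return a set of word tokens."""
    return {w.strip(".,;:!?") for w in text.lower().split()} - {""}

def score_response(answer: str, reference: str) -> float:
    """Return an overlap score in [0, 1] (Jaccard similarity of token sets)."""
    a, r = tokenize(answer), tokenize(reference)
    if not a or not r:
        return 0.0
    return len(a & r) / len(a | r)

reference = "Plants convert light energy into chemical energy."
print(score_response("Plants turn light energy into chemical energy.", reference))
print(score_response("I do not know.", reference))
```

A lexical scorer like this fails on paraphrases ("turn" vs. "convert" lowers the score despite identical meaning), which is precisely the gap that entailment-based transformer models address.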

 

Pre-activities

Participants should have a basic understanding of programming or statistical modeling.

 

Session Description

The workshop is divided into five blocks:

  • 5 mins: kick-off presentation, getting to know each other's backgrounds.
  • 15 mins: theoretical presentation on formative assessment and feedback.
  • 30 mins: technical presentation: transformer language models for educational scoring and feedback generation; the Huggingface transformers framework for implementing them; open problems.
  • 20 mins: interaction with prototypes of feedback systems.
  • 20 mins: open discussion: ethics, open problems, questions.
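The technical block on the Huggingface transformers framework can be previewed with a short sketch of entailment classification, the prerequisite for content scoring mentioned above. The checkpoint `roberta-large-mnli` is one publicly available NLI model chosen here purely for illustration (any MNLI-style model would do), and the first run downloads its weights.

```python
from transformers import pipeline

# Textual entailment (NLI) as a building block for content scoring:
# does the learner's answer entail the reference answer?
nli = pipeline("text-classification", model="roberta-large-mnli")

reference = "Plants convert light energy into chemical energy."
answer = "Plants turn light into chemical energy."

# MNLI-style models expect a premise/hypothesis sentence pair.
result = nli({"text": answer, "text_pair": reference})
print(result)  # a dict with 'label' (entailment/neutral/contradiction) and 'score'
```

In a content-scoring setting, an "entailment" prediction against the reference answer is treated as evidence that the response is correct.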