Towards Interpretable Educational AI: Empowering Learners' decisions in TEL with Explainable AI


Speakers

Hasan Abu-Rasheed
University of Siegen, Germany
Christian Weber
University of Siegen, Germany

Start

17/05/2024 - 10:30

End

17/05/2024 - 12:00

Workshop Space B

Needs Analysis

In the evolving landscape of TEL, the integration of AI holds immense promise for supporting and enhancing learning experiences. However, as intelligent algorithms are developed for TEL, it is crucial to address the inherent challenges posed by black-box AI systems. The lack of transparency and interpretability of AI algorithms can hinder learners’ understanding of machine-generated predictions and recommendations, and thus limit their ability to make informed decisions about the personalized input they receive from these algorithms. This workshop aims to help TEL PhD candidates bridge this gap by emphasizing the role of explainable AI in the educational domain. By understanding the types of explainability and the different approaches to transforming intelligent algorithms into interpretable ones, PhD candidates can develop solutions that promote transparency and empower learners. This, in turn, addresses the fundamental need to place learners at the center of technological advancements in education.


Learning Objectives

In this workshop, participants will:

  • Gain a comprehensive understanding of the role of explainable AI in educational settings.
  • Learn strategies to incorporate transparency and interpretability into AI-based educational solutions, to promote learners’ agency and decision-making.
  • Explore techniques for evaluating the explainability of AI models in TEL applications.

Planned workshop outcomes:

  • Enhanced awareness of the technical, ethical, and pedagogical implications of black-box and open-box AI in education.
  • Ability to integrate explainability considerations and methods into the design and implementation of TEL solutions.
  • Ability to evaluate the participants’ existing AI models in terms of their transparency and interpretability.
  • A deeper understanding of tools that participants can use to develop their systems and algorithms as learner-centric AI technologies in TEL.

Pre-activities

No prerequisites or preparations are needed for this workshop.


Session Description

This workshop is an interactive, group-based session designed to engage participants in critical discussions and practical exercises related to explainable AI in TEL and to their own intelligent systems. The session will include three parts:

  1. A presentation (35 min) that covers the following topics:
    1. An overview of transparency and interpretability in educational AI, highlighting real-world examples, limitations, and case studies.
    2. Various techniques and methodologies for achieving explainability in AI models, including model-agnostic approaches, interpretable machine learning algorithms, and visualization techniques.
    3. Ethical considerations and pedagogical implications of transparent AI systems in educational contexts.
  2. A practical exercise (35 min), in which participants will apply their newfound knowledge in groups. Each group will design an interpretable AI solution for a pre-defined educational scenario, then showcase its solution and receive peer feedback from the other groups.
  3. A reflection segment (20 min), in which participants discuss how the concepts and strategies they have learned can support their own PhD research, enhancing its explainability and transparency for learners and other stakeholders in their target groups.
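To make the idea of a model-agnostic explainability approach concrete, the sketch below illustrates permutation importance on a toy educational scenario. Everything in it is a hypothetical illustration, not workshop material: the dataset (study hours and quiz scores predicting pass/fail), the stand-in "black box" model, and the threshold values are all invented for the example. The technique itself is standard: shuffle one input feature and measure how much the model's accuracy drops; a large drop means the model relies heavily on that feature, which can then be communicated to learners.

```python
import random

# Hypothetical toy dataset: (study_hours, quiz_score, passed).
# The rule generating the labels is an assumption made up for this sketch.
data = [(h, q, 1 if h * 2 + q > 20 else 0)
        for h in range(10) for q in range(15)]

def model(hours, quiz):
    # Stand-in "black box"; in practice this would be a trained classifier.
    return 1 if hours * 2 + quiz > 20 else 0

def accuracy(rows):
    # Fraction of rows where the model's prediction matches the label.
    return sum(model(h, q) == y for h, q, y in rows) / len(rows)

def permutation_importance(rows, feature_index, seed=0):
    # Shuffle one feature column and measure the drop in accuracy:
    # the bigger the drop, the more the model relies on that feature.
    rng = random.Random(seed)
    col = [row[feature_index] for row in rows]
    rng.shuffle(col)
    permuted = [tuple(col[i] if j == feature_index else row[j]
                      for j in range(3))
                for i, row in enumerate(rows)]
    return accuracy(rows) - accuracy(permuted)

print("baseline accuracy:", accuracy(data))
print("importance of study_hours:", round(permutation_importance(data, 0), 3))
print("importance of quiz_score:", round(permutation_importance(data, 1), 3))
```

Because the approach only needs the model's inputs and outputs, the same loop works unchanged for any classifier, which is exactly what "model-agnostic" means; libraries such as scikit-learn offer production-ready versions of this idea.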