Speakers
Doris Kristina Raave (University of Tartu, Estonia)
Eric Roldan Roa (ScaDS.AI, Germany)
Juan Carlos J. Ramos Martinez (ScaDS.AI Leipzig, Germany)
The Sandbox Student: Engineering Productive Friction for High-Stakes Relational Training
📅 Thursday 21/05 14:00-15:30h
📍 Workshop Space A
🔎 Needs Analysis
“The Sandbox Student” uses AI as a relational agent that models resistance while remaining responsive to effective scaffolding. By engineering “productive friction,” we create a safe-to-fail laboratory for training the human side of teaching (e.g., patience and de-escalation). The workshop thus connects to TEL by merging Human-AI Interaction with Critical Reflective Practice, proposing LLMs as reflective mirrors for practising high-stakes human interactions. Participants compare soft-coded personas (emergent AI behaviour) with hard-coded social logic (rule-based constraints), allowing them to analyse the limitations of predictive modelling in social interaction. This addresses a critical question in TEL: how to design replicable, data-rich simulations of non-linear human behaviour. PhD participants will discuss the ethics of emotional simulation, weighing the benefits of a scalable, faithful training tool against the ethical costs of oversimplifying human reality.
📒 Session Description
1. Participants interact individually with two distinct AI student models: Hard-Coded (rigid, rule-based logic) and Soft-Coded (emergent, non-linear behaviour). The goal is to experience “productive friction” firsthand and identify where each model succeeds or fails in mimicking human resistance.
2. Participants swap interaction logs to debate the fidelity gap: did the soft-coded AI feel true to life? Was it consistent across interactions? Was the hard-coded AI too robotic?
3. A deep-dive discussion on the ethics of emotional modelling. We will weigh the benefits of scalable, accessible training tools against the “ethical cost” of using automated systems that might oversimplify complex human psychology.
4. In groups, we tackle the central design challenge: how do we strike a balance between scalability (the predictability needed for automated feedback) and authenticity (the non-linear “messiness” of a real human)? We will brainstorm hybrid models for future simulations.
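To make the contrast in step 1 concrete, here is a minimal sketch of the two persona styles. All rules, utterances, and function names are illustrative assumptions, not the workshop's actual implementation; the soft-coded side uses a placeholder sampler where a real system would call an LLM conditioned on a persona prompt.

```python
import random

# Hard-coded persona: deterministic, rule-based social logic.
# Each teacher move maps to a fixed response, so behaviour is fully
# replicable (good for automated feedback) but visibly "robotic".
HARD_CODED_RULES = {
    "command": "I don't see why I should. This is pointless.",
    "empathise": "...fine. Maybe. What do you want me to do?",
    "threaten": "Go ahead, I don't care.",
}

def hard_coded_student(teacher_move: str) -> str:
    # Unrecognised moves fall back to a default deflection.
    return HARD_CODED_RULES.get(teacher_move, "Whatever.")

# Soft-coded persona: the same resistant student, but behaviour emerges
# from a model conditioned on a persona description and the dialogue
# history. Sketched here with random sampling instead of a model call.
PERSONA_PROMPT = (
    "You are a 14-year-old student who feels singled out by the teacher. "
    "You resist commands but slowly soften when shown genuine empathy."
)

def soft_coded_student(teacher_utterance: str, history: list[str]) -> str:
    # In practice: send PERSONA_PROMPT + history + teacher_utterance to
    # an LLM. Placeholder: sample a plausible reaction, so two runs with
    # identical input can diverge (non-linear, non-replicable).
    reactions = [
        "Why are you always on my case?",
        "*shrugs and looks away*",
        "Okay... but only because you actually listened.",
    ]
    return random.choice(reactions)

# The fidelity gap in miniature: same input, different guarantees.
print(hard_coded_student("empathise"))        # always the same string
print(soft_coded_student("I hear you.", []))  # may differ run to run
```

The design trade-off debated in step 4 is visible here: the rule table yields replicable, machine-readable data but cannot surprise the trainee, while the sampled persona is unpredictable but resists consistent measurement.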
💡 Learning Objectives
By the end of the workshop, you will be able to:
- Formulate criteria for selecting between deterministic (hard-coded) and probabilistic (soft-coded) models based on specific educational goals, balancing the need for replicable data with the need for human-like unpredictability.
- Evaluate the epistemological limits of using predictive data to represent human behaviour, identifying what is lost when we translate a non-linear social interaction into a machine-readable format.
- Conduct an ethical audit of automated training systems, evaluating the psychological “cost” of reducing human traits to algorithms and identifying where simulation ends and oversimplification begins.


