1. Towards Capturing Teacher Agency Manifestations within Orchestration Activities: A Conceptual-Analytical Framework

Víctor Alonso-Prieto, Yannis Dimitriadis, Luis P. Prieto, Gustavo Zurita and Claudio Álvarez

Abstract: Teacher agency is critical in understanding the adoption and impact of TEL innovations that affect instructional practices, especially those involving AI technologies. Despite its importance, few analytical frameworks operationalize how technologies mediate teacher agency within orchestration (i.e., the coordination of multiple learning activities across social levels). This paper introduces the TAMOA (Teacher Agency Manifestations within Orchestration Activities) framework, which integrates constructs from ecological and cognitive agency models with orchestration dimensions in TEL. TAMOA enables the identification of both observable actions and reflective processes through which teachers exert agency in the design, management, awareness, and adaptation of technology-mediated learning activities. We illustrate the framework’s utility through a case study involving ethics training in higher education, supported by a collaborative learning platform. Our analysis reveals how TAMOA facilitates the interpretation of teacher agency manifestations and informs future inquiry into the role of agency in this particular TEL system. Such analysis can help in designing TEL systems (especially intelligent ones) in a way that respects practitioners’ agency, which has been linked to increased satisfaction and well-being.


2. Exploring Human-AI Collaboration in Flipped Learning Design: A Comparative Study of Generative AI and Pedagogical Patterns

Shatha N. Alkhasawneh and Davinia Hernández-Leo

Abstract: The flipped learning model has emerged as an innovative approach to fostering active learning and engagement. However, its implementation is often hindered by challenges such as addressing learners’ gaps in prior preparation, feedback exchange, and team regulation. This study explores the integration of Generative Artificial Intelligence (GenAI), specifically ChatGPT, a GPT-4-based conversational agent, and the FLeD tool—a learning design platform utilizing predefined pedagogical patterns—to support educators in designing flipped classrooms while addressing these gaps. Through a hands-on workshop with 18 educators and researchers in higher education, participants engaged in the design of learning scenarios specifically for flipped classrooms, utilizing GenAI for brainstorming, the FLeD tool’s pedagogical patterns, and a combined approach. A qualitative and quantitative analysis of participants’ experiences revealed that the integrated use of GenAI and pedagogical patterns offers the most valuable support—combining structure with creative adaptability. While educators appreciated the flexibility and contextual relevance of the combined approach, they emphasized the need for better alignment with real-world curricula. The findings highlight practical implications for enhancing Human-AI collaboration and supporting teacher-led innovation in learning design. Suggestions for enhancing the FLeD tool with GenAI include improved interactivity, structured prompts, and tailored responses to support diverse educational needs.


3. Partnering with AI: A Pedagogical Feedback System for LLM Integration into Programming Education

Niklas Scholz, Manh Hung Nguyen, Adish Singla and Tomohiro Nagashima

Abstract: Feedback is essential for effective learning, yet providing timely, pedagogically-sound feedback remains challenging. With the rise of large language models (LLMs), research has turned to automated feedback in programming education. However, prior work often overlooks key feedback adaptation criteria, such as student performance. We present a novel multi-agent LLM feedback framework, derived from established feedback models and input from school teachers. We implemented a learning platform for Python programming with LLM-based feedback based on the framework and evaluated its effectiveness with eight computer science teachers. Results show that teachers considered our feedback pedagogically sound, comprehensive, and effective in supporting student learning. However, we also found major challenges, including the adaptation of feedback to classroom contexts, which underscores the importance of involving human teachers in the feedback-giving process.


4. Supporting Learning Design for Sustainable Development Using Large Language Models

Patrick Ocheja, Shatha Nawaf Alkhasawneh, Emily Theophilou, Hiroaki Ogata and Davinia Hernandez-Leo

Abstract: This study explores the integration of Large Language Models (LLMs) into the learning design process of Education for Sustainable Development within Project-Based Learning (PBL) frameworks. Using the ABPxODS platform as the experimental setting, Mirai AI, a conversational agent providing educators with context-aware scaffolding, real-time feedback, and adaptive support for designing sustainability-aligned learning scenarios, was designed and evaluated. A controlled experiment involving 16 educators was conducted to assess the impact of artificial intelligence assistance on project design efficiency and quality. The results indicate that such assistance improves alignment with the United Nations Sustainable Development Goals and enhances design quality, particularly for educators with intermediate experience in PBL. However, novice educators encountered usability challenges, and automated evaluations performed by artificial intelligence demonstrated reliability limitations. This work reveals that AI effectiveness depends critically on educator experience levels while highlighting significant limitations in AI-based educational assessment reliability.


5. Human-Agent Interaction and Collaboration in Education: A Review and Future Research Prospects

Wenting Sun

Abstract: With the rapid advancement of Generative Artificial Intelligence (AI) tools, human-agent interaction and collaboration in education have become indispensable. However, these interactions face significant challenges due to the complexity of integrating AI agents into educational contexts. To enhance our understanding of human-agent interaction and collaboration in education, we conducted a systematic review following the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) workflow. This review maps the landscape of research on human-agent interaction in education, including agent types and roles, teaching modes, variables of interest, learning theories, data collection and analysis methods, and interaction patterns. Our findings indicate that the development of human-agent interaction in education is still in its early stages, with a particular focus on disembodied and virtual agents acting as assistants or tutors in one (agent)-to-one (learner) learning scenarios, especially in language learning and computer science. Key areas of interest include learning performance, behaviours, and engagement. The primary data sources are human-agent conversations, questionnaires, interviews, and log data, with content analysis often combined with other methods to analyse process-related data. Notable interaction patterns include transitions in task-specific aspects, the general learning experience of human-agent interaction, transitions in hint-seeking behaviour, and human-agent engagement and learning performance. Despite these advancements, significant gaps remain in understanding the dynamics of human-agent interaction, such as the dynamic interaction process, complex problem-solving strategies, scalability, and higher-order thinking. This review highlights the need for further research to address these gaps and improve the efficacy of human-agent collaboration in educational settings.


6. Towards developing a guideline for optimizing interface design of Intelligent Tutoring Software

Shintaro Sato, Qingzhi Zhang, Man Su and Tomohiro Nagashima

Abstract: Intelligent Tutoring Systems (ITSs) have seen increased adoption in various teaching and learning settings, yet research on their user interface design remains limited, particularly in addressing multi-dimensional design aspects and their impact on student motivation. We developed a multi-dimensional guideline for designing optimal interfaces for supporting learning with intelligent tutors, covering the following four aspects: visibility, learnability, efficiency, and visual presentation. The guideline, developed based on existing frameworks and guidelines, is designed to address students’ cognitive learning needs and preferences. We used the guideline to re-design the user interface of an ITS for elementary mathematics and conducted a mixed-method, within-subject study with 4th-8th graders, comparing the original and re-designed tutor versions. Results demonstrated significant improvements in student motivation in favor of the re-designed version. This work offers a comprehensive design guideline that ITS designers could use to create learning experiences that are both pedagogically effective and motivationally engaging.


7. Students’ use of digital technologies to support emotion regulation when learning online

Jake Hilliard, Karen Kear and Helen Donelan

Abstract: Effective emotion regulation is essential for successful learning experiences in higher education. However, limited research has examined how university students manage their emotions in online learning settings. In particular, the role of digital technologies in supporting online students’ emotion regulation remains largely underexplored. This paper addresses this gap by investigating how undergraduate students use digital technologies to regulate their emotions when studying online. Data was gathered from survey responses of 92 undergraduate students in the Science, Technology, Engineering, and Mathematics faculty at The Open University UK. Specifically, three survey questions explored the frequency of digital technology use for emotion regulation, the specific tools employed, and examples of how these technologies support emotion regulation during online study. Findings indicate that many students actively use digital technologies to manage their emotions while studying online. Moreover, a diverse range of digital technologies are utilised, including music and video streaming platforms, online games, meditation apps, and social media. This paper provides an initial exploration of digital emotion regulation among online learners, offering valuable insights for educators and learning designers aiming to create more emotionally supportive online learning environments.


8. Designing the Course Load Analytics Platform

Conrad Borchers, Shreya Sheel, Anirudh Pai, Sher Shah and Zachary A. Pardos

Abstract: Increasing empirical evidence suggests that credit units are an insufficient proxy for student workload in higher education. Course load analytics (CLA) could support course selection and academic advising by offering a more accurate prediction of course load based on data from a learning management system and historical enrollments. We describe the development of a CLA platform for academic advising. The CLA platform surpasses time-bound credit hour metrics by predicting cognitive demand and psychological stress associated with courses while identifying workload spikes throughout the semester on a weekly basis. We describe how the platform enables students and advisors to plan semesters using a course catalog tool, allowing them to explore alternative semester workload scenarios. We contribute generalizable knowledge and procedures for instrumenting similar platforms that support students in managing and preparing for their academic course workload. We also contribute open-source code for researchers and practitioners to adopt and deploy our CLA platform.


9. Investigating the Effects of Motivational Pedagogical Agents on Student Learning and Choice Making in an Adaptive Learning System

Man Su, Katharina Bonaventura, Shintaro Sato and Tomohiro Nagashima

Abstract: The use of pedagogical agents (PAs) in interactive learning environments holds promise for improving learning outcomes, yet their impact on strategic choice making remains underexplored. This study investigated how motivational pedagogical agents influence students’ conceptual learning and choice-making behaviors, such as whether to engage in metacognitive tasks that are optional, when using an adaptive algebra learning technology. In an experiment, 49 high school students were assigned to either an Agent condition, featuring motivational prompts delivered by PAs, or a Non-Agent condition without PAs or prompts. Although the motivational prompts by PAs did not significantly improve overall learning gains or reduce error rates compared to the Non-Agent condition, younger students (Grade 9) demonstrated modestly higher improvements, suggesting differential benefits based on students’ prior knowledge. Additionally, older students (Grade 10) in the Agent condition showed higher engagement with optional tasks over time. These findings highlight the potential of motivational PAs to support strategic choice-making and indicate the need for future adaptive systems to offer context-sensitive prompts that guide learners in making productive choices aligned with their learning goals and self-regulation needs.


10. A Systematic Review of Integrating Knowledge Graphs with Large Language Models: Applications, Models, Evaluation Methods, and Opportunities

Wenting Sun

Abstract: Generative Artificial Intelligence (GenAI), particularly large language models (LLMs) like ChatGPT, has the potential to scale personalized feedback and reduce the workload of teaching and instruction. However, GenAI faces challenges in educational applications, such as generating outputs with hallucinations, lacking explainability in reasoning, and producing lengthy responses. Additionally, GenAI often fails to meet education-related standards, such as curriculum requirements and learner-related data, making it less context-sensitive in authentic courses. Knowledge Graphs (KGs), characterized by their structured representation of entities, relations, and attributes, can provide consistent answers and hierarchical reasoning about information clusters. The integration of KGs with LLMs shows promise in linking graduate requirements and knowledge points, supporting interdisciplinary knowledge theme design in STEAM projects. Despite these potentials, there is a lack of comprehensive reviews on how KGs combined with LLMs can enrich learning experiences and empower educators in the dynamic landscape of modern education. Following the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) framework, this systematic review examines selected studies to highlight specific KG functionalities, the role of LLMs in knowledge extraction, data resources, LLMs used, and evaluation methods. The review contributes to three key areas: applications of KGs combined with LLMs in education, a data workflow from the data source to evaluation in the application context, and opportunities for KGs combined with LLMs to support lifelong learning and reduce educational inequality utilizing open resources.


11. Assisting Teachers in the Design of Feedback for Online Learning Using Large Language Models: A Theory-Driven Approach

Paraskevi Topali, Alejandro Ortega-Arranz, Miguel L. Bote-Lorenzo and Juan I. Asensio-Perez

Abstract: This poster paper explores the extent to which Large Language Models (LLMs) might help teachers of online courses design feedback interventions based on their learning designs. This implies (a) anticipating learners’ problems within the learning design tasks, (b) selecting data-driven indicators based on learners’ digital traces that might potentially help the detection of those problems during enactment, and (c) deciding the type and timing of feedback intervention to help learners overcome their problems. Using real course learning designs as input to the LLMs, together with predefined catalogs of problems, indicators, and feedback reactions grounded in existing feedback theories, our research showcases how ChatGPT-o1 is capable of suggesting feedback design decisions that are meaningful to human instructional designers. The presented results suggest that state-of-the-art LLMs, when asked to base their answers on a theory-driven design space, may play a significant role in assisting novice teachers and instructional designers in incorporating pedagogically sound feedback interventions in their online courses.


12. Designing a study to evaluate the Impact of WebXR with GenAI on Student Engagement in Distance Education

Kamran Mir, Geraldine Gray, Ana Schalk and Muhammad Zafar Iqbal

Abstract: This study aims to evaluate how immersive content can enhance student engagement, learning outcomes, and satisfaction in an open and distance learning context. The paper gives an overview of the background and design of a research study that investigates the impact of immersive virtual learning environments on distance education by integrating WebXR with generative AI into Moodle, a widely used virtual learning environment (VLE). The immersive learning environment in this study will be specifically designed and developed for a module taught at undergraduate level and integrated into the Moodle VLE. The design considers two groups of learners, an experimental group and a control group. The experimental group will engage with the course material through the immersive virtual learning environment, utilizing WebXR and Generative AI agents to enhance interactivity and engagement. In contrast, the control group will experience the same course content delivered through conventional online teaching methods using Moodle and MS Teams. Using Learning Analytics (LA) and the DeLone and McLean Information Systems Success Model as an evaluation framework, the research will assess a number of dimensions including system quality, information quality, service quality, user satisfaction, intention to use, net benefit, engagement and learning gain. A mixed-methods approach will be employed to collect feedback from distance learners through an online survey. This data will be combined with log data from the VLE and WebXR. Findings are expected to provide insights into the effectiveness of WebXR in enhancing the online learning experience and will offer practical recommendations for integrating Extended Reality (XR) tools into existing LMS platforms to support innovative and engaging education in distance learning contexts.


13. Exploring the assistive role of AI in assessment, in a meaningful way

Catarina Lelis

Abstract: This paper explores an innovative assessment approach designed for master-level students enrolled in a Communication and Technologies program, and it addresses the challenges posed by the use of generative AI tools in higher education settings. Rather than prohibiting the use of AI, we developed “Devil’s Advocates Wear AI” (DAwAI), a novel assessment method that deliberately incorporates AI in an assistive role while ensuring authentic learning. DAwAI adapts the “think-pair-share” pedagogical strategy into a semester-long exercise where students first independently develop reflective learning journals on course topics, imagining positive future applications in their careers. Subsequently, each student anonymously critiques the work of one peer, supported by AI tools which help them identify unconsidered perspectives, whilst having to document their prompting strategies and AI interactions. The approach concludes with revealing identities and sharing all reflections with the class. This methodology promotes continuous engagement with course content, develops critical thinking and futures literacy, teaches responsible AI utilization, and reduces opportunities for academic misconduct by emphasising personal reflection. Preliminary observations indicate strong student engagement, particularly among technically inclined participants. The assessment demonstrates how AI can serve as a meaningful dialogue partner in education, transforming evaluation into an integrated learning opportunity rather than merely an endpoint measurement. Future research will compare experiences across different student populations and analyse the complementary use of visual elements observed in submissions.


14. Generating AI Images for the OER Conversion Tool “Anonymous”

Lubna Ali, Zongxin Liu, René Röpke and Ulrik Schroeder

Abstract: Open Educational Resources (OER) are freely accessible educational materials that promote global learning, with images being one of the most commonly used types of resources. The “Anonymous” tool automates the conversion of educational materials containing images, such as text and presentation documents, into OER format. It replaces copyrighted images with CC-licensed alternatives. However, finding exact image matches can be challenging, especially when specific images such as scientific diagrams and historical illustrations are required. To address this limitation, generative AI is incorporated into “Anonymous”, allowing educators to create custom images tailored to their specific needs. This paper examines the integration of generative AI into “Anonymous”, focusing on model selection, licensing considerations, integration challenges, and potential research directions. Addressing these issues aims to empower educators to produce innovative, high-quality OER materials, fostering inclusivity and effectiveness in education.


15. Scaffolding Classroom Learning Scenarios with Generative AI: Socializing a Socratic Chatbot with Researchers and Practitioners

Isabel Hilliger, Mar Pérez-Sanagustín, Esteban Villalobos, Rafael Ferreira Mello and Carlos González

Abstract: In recent years, the emergence of generative AI-based tools has become a prominent focus in both research and educational practice. Many studies have explored how chatbots based on generative AI support teaching and learning, particularly in facilitating student independent study and assisting with feedback provision and assessment scoring. However, there is a gap in understanding how these agents support teaching and learning within the classroom, beyond well-known applications such as writing scaffolding. To illustrate the potential of AI-based chatbots in higher education, this poster paper describes the design and implementation of a Socratic tutor chatbot. Developed through a design-based research approach, this chatbot evolved into a Streamlit web-based application. Unlike ChatGPT, which starts from an empty prompt, our Socratic tutor begins with a learning scenario defined by the instructor, guiding student learning through structured questioning. Initially conceived as a text messaging tool, the chatbot has undergone four iterations and has been implemented in 10 subjects, involving a total of 277 students. The chatbot leverages GPT-4 Turbo and course materials, such as syllabi and readings, to support various learning activities. Additionally, the platform provides real-time analytics of student interactions, enabling instructors to monitor and adjust their teaching strategies promptly. This paper presents findings from a workshop with 44 participants, including professors, researchers, and managers. Out of the 44, 20 voluntarily answered an online survey, and eight reported learning scenarios in which they would use the tool in their institutions. Data reveals that the tool is perceived as easy to use and effective in scaffolding classroom teaching and learning activities. Notably, many reported scenarios focus on developing AI literacy skills among students, highlighting a promising direction for future research.


16. Error Classification in Stoichiometry Tutoring Systems with Different Levels of Scaffolding: Comparing Rule-Based Classification and Machine Learning

Hendrik Fleischer, Conrad Borchers, Sascha Schanze and Vincent Aleven

Abstract: Comparing rule-based and machine learning (ML) approaches to error classification is crucial for advancing adaptive instruction. However, few studies have examined their comparative accuracy for tutoring systems with different levels of scaffolding. The present study addresses this gap by examining the classification of stoichiometry errors using data from 61 science students enrolled at a public German university who interacted with two distinct tutoring systems. We annotated 1,164 error clips from log data and derived an error classification scheme with eight categories covering system-related (e.g., usability) and domain-specific (e.g., unit conversion) categories. We developed decision rules and trained an ML model, comparing automatically classified errors in segments of learner inputs to classifications based on our expert model. Our results indicate that domain-specific errors requiring procedural knowledge are more accurately classified by the rule-based classifier, while concept-based errors are better captured by ML, though only in a lowly scaffolded tutoring system. These findings suggest researchers must carefully choose modeling approaches to address misconceptions in STEM learning.


17. Leveraging LLMs to Assess Tutor Moves in Real-Life Dialogues: A Feasibility Study

Danielle R. Thomas, Conrad Borchers, Jionghao Lin, Sanjit Kakarla, Shambhavi Bhushan, Erin Gatz, Shivang Gupta, Ralph Abboud and Kenneth R. Koedinger

Abstract: Tutoring improves student achievement, but identifying and studying what tutoring actions are most associated with student learning at scale based on audio transcriptions is an open research problem. This present study investigates the feasibility and scalability of using generative AI to identify and evaluate specific tutor moves in real-life math tutoring. We analyze 50 randomly selected transcripts of college-student remote tutors assisting middle school students in mathematics. Using GPT-4, GPT-4o, GPT-4-turbo, Gemini-1.5-pro, and LearnLM, we assess tutors’ application of two tutor skills: delivering effective praise and responding to student math errors. All models reliably detected relevant situations, for example, tutors providing praise to students (94–98% accuracy) and a student making a math error (82–88% accuracy), and effectively evaluated the tutors’ adherence to tutoring best practices, aligning closely with human judgments (83–89% and 73–77%, respectively). We propose a cost-effective prompting strategy and discuss practical implications for using large language models to support scalable assessment in authentic settings. This work further contributes LLM prompts to support reproducibility and research in AI-supported learning.


18. Can Large Language Models Identify Locations Better Than Linked Open Data for U-Learning?

Pablo García-Zarza, Juan I. Asensio-Pérez, Miguel L. Bote-Lorenzo, Luis F. Sánchez-Turrión, Davide Taibi and Guillermo Vega-Gorgojo

Abstract: Many ubiquitous learning (u-learning) applications heavily rely on the accurate retrieval of points of interest (POIs) in a geographical area, as it is at these locations that learning activities are proposed to students. In previous work, semantic technologies have been successfully employed to retrieve such POIs from Linked Open Data (LOD) datasets. However, recent advancements in Large Language Models (LLMs), and their improved performance in processing factual data, raise the question of whether u-learning applications could rely on LLMs to obtain exhaustive lists of POIs in a given geographical area. This poster paper provides empirical evidence about the current limitations of LLMs when carrying out this task in comparison with the capabilities of LOD datasets. More specifically, we compare the capabilities of a LOD dataset (Wikidata) and two LLMs (ChatGPT-o1 and DeepSeek-R1) for providing exhaustive lists of cultural heritage sites of three European cities and regions. Our results suggest that currently available LOD semantic datasets can complement state-of-the-art LLMs in terms of accuracy, completeness, consistency, and validity when gathering POIs for the design of u-learning situations.


19. Classifying Students’ Meta-Cognitive Comments

Christian Hoffmann, Madou Koné, Nassim Bouarour and Sihem Amer-Yahia

Abstract: We report progress on automatically classifying written comments that students provide after receiving their performance on knowledge tests based on closed-ended questions with confidence levels. This classification allows teachers to effectively analyze those comments and helps them better understand students’ difficulties and foster the development of meta-cognitive skills. We describe a classification pipeline that seamlessly integrates large or small language models (LLMs or SLMs), leveraging state-of-the-art retrieval augmented generation, and human feedback. We apply our approach to field data from high school physics tests and to a classification scheme derived from a model for self-regulated learning. The best classification accuracies achieved for SLMs are of the order of 0.8, which is comparable to what can be obtained with LLMs. The classification obtained indicates that students in similar classroom contexts have very different perceptions and levels of analysis of their performance on assessments. While some focus solely on the factual interpretation of their quantitative results, others comment on their level of confidence, self-efficacy and learning strategies.


20. Assessing the effect of performance prediction on students’ perceptions of courses

Sebastián Luarte, Julio Guerra and Eliana Scheihing

Abstract: While Learning Analytics Dashboards (LADs) gain relevance in higher education for supporting student self-regulation, evaluation of such tools remains limited. This study examines the impact of adding course-level predictions to a program-completion LAD on students’ perceptions of course difficulty, readiness, workload, and anxiety. A controlled study with 39 undergraduate engineering students indicates that predictive features significantly reduce the perceived course difficulty when the predicted probability of passing is 50% or higher. Furthermore, there is evidence suggesting that predictions below 50% were associated with increased workload estimates, while those above 50% corresponded to reduced anxiety. This study contributes to the design of predictive analytics that support informed student decision making in course enrollment.


21. CodeFarm: An application for the development of computational thinking through video games in early educational stages

Gema Jiménez-González, Estefania Martín-Barroso and María Zapata-Cáceres

Abstract: Computational thinking (CT), as defined by Wing, involves mental processes for formulating problems executable by an information-processing agent. Introducing CT in early childhood supports not only cognitive skills like pattern recognition and algorithmic thinking, but also teamwork and communication. Given the widespread presence of video games in children’s lives, this study proposed a tablet-based application using mini-games to teach CT skills in an engaging and educational manner. The app was tested through direct observation with children aged 5 and 6, who responded positively and interacted easily with the games. While most mini-games were well-received, two were identified as needing improvement due to their high cognitive demands. The children found it easy to interact with the application and enjoyed engaging in its activities.


22. Are MOOCs truly accessible? Insights into motivations, attitudes and challenges of non-native English participants

Pauline Jadoulle, Tanguy Dubois, Pauline Degrave and Magali Paquot

Abstract: Massive Open Online Courses (MOOCs) have been envisioned as a transformative force in global education, offering free and flexible learning opportunities to individuals from diverse socioeconomic and geographic backgrounds. However, despite their promise of accessibility, approximately 75% of MOOCs are in English, posing challenges for non-native speakers. These learners often struggle with comprehension, particularly when video content lacks visual aids. Understanding the experience of these participants is crucial to improving the inclusivity and effectiveness of MOOCs in English. This poster presents a pilot study on non-native speakers’ motivations, attitudes, and challenges in English-taught MOOCs using a survey-based approach. The survey includes 5-point Likert-scale items (e.g. “I take MOOCs in English to improve my English skills”) and open-ended questions (e.g. “Are there other reasons for you to take MOOCs in English?”), and focuses on non-native English speakers enrolled in an English-taught MOOC over the past 12 months, ensuring a balanced distribution across proficiency levels (B1, B2, C1, C2). Data collection was conducted via the crowdsourcing platform Prolific, facilitating participant recruitment and enabling us to pay the participants. A total of 87 participants took part in the study. To ensure validity, the survey underwent Exploratory Factor Analysis and Cronbach’s alpha reliability tests. Findings highlight the unavailability of similar courses in other languages as a key driver, pointing to a gap in non-English online educational resources. In terms of attitudes, participants perceive MOOCs in English as having equivalent or higher quality and prestige compared to those in other languages, and they exhibit particularly high English proficiency self-efficacy. Reported challenges are minimal, with few difficulties in understanding or expressing themselves in English and little reliance on external language support tools (e.g., dictionaries, Google Translate, DeepL, ChatGPT). Ultimately, this research aims to inform strategies for making MOOCs more accessible and supportive for non-native English speakers worldwide.


23. Are Students’ Perceptions of Auto-grader Effectiveness Biased by Their Grades?

Yufan Zhang, Jaromir Savelka, Heather Burte, Christopher Bogart, Seth Goldstein and Majd Sakr

Abstract: Auto-graders are increasingly relied upon in CS education, particularly in large or online courses, yet evaluating their effectiveness remains difficult due to variations in learning contexts and implementation. In this study, we evaluate an auto-grader used in a graduate-level cloud computing course by analyzing student ratings of its effectiveness across seven projects, and examining whether those ratings are potentially biased by students’ grades or by their utilization of the auto-grader’s feedback. Our dataset includes 1,163 students and over 2,200 survey responses collected across four semesters. We find that 96% of responses rate the auto-grader as either “effective” or “somewhat effective,” and that many students change their rating from one project to the next, suggesting authenticity in their evaluations. While we observe a statistically significant association between higher ratings and higher grades, the correlation is weak ($\rho=0.19$), likely due to low variance in ratings and high variance in grades. We find no evidence of correlation between ratings and feedback utilization ($\rho=0.07$), indicating that perceived effectiveness is not clearly driven by usage behavior. Our findings suggest that student perceptions of auto-graders can serve as a meaningful, if limited, tool for evaluating them, and we encourage educators to incorporate student feedback into future assessments of auto-grader design and implementation.


24. Transparent Risk Predictions and Explanatory Feedback: Boosting Engagement and Course Achievement in Online Professional Learning

Mouaici Mohamed

Abstract: This paper investigates the impact of real-time risk-of-failure predictions, accompanied by LIME-based textual explanations, on learner engagement and course achievement in an online professional learning context. The study involves 240 learners enrolled in 24 courses, divided into three groups: a control group receiving no predictions, a prediction-only group, and a treatment group receiving both risk predictions and textual explanations via a learning analytics dashboard. Engagement is measured using four behavioral indicators derived from LMS digital traces, calculated both before and after predictions. The results reveal that the prediction-only group exhibits an engagement increase of approximately 12–15%, while the treatment group sees a further improvement of 18–22% compared to the control group. Notably, 69% of learners in the treatment group rate the predictions and textual explanations as useful. This group also records higher final quiz scores and course completion rates, indicating enhanced course achievement. A mixed-design ANOVA confirms the statistical significance of these findings, underscoring the importance of interpretable and transparent real-time predictions in supporting learner engagement and course achievement.


25. Exploring the Complex Analytics Interplay of LMS Design, Usage, Academic Outcomes, and Perceived Workload: A Case Study

Ariel Ortiz-Beltrán, Francielle Marques and Davinia Hernández-Leo

Abstract: This paper presents a comprehensive analytical case study of 53 undergraduate courses at a brick-and-mortar Spanish university, examining how Learning Management System (LMS) engagement—defined by LMS design based on the number of unique activities and resources made available by the teachers (opportunities for engagement or intended engagement), course interaction frequency, and daily access frequency (actual engagement)—relates to academic performance and perceived workload. Using descriptive statistics and a Generalized Additive Model, we uncover threshold effects: moderate engagement maximizes both course grades and workload satisfaction, whereas very low or very high activity levels reduce the benefits. Rather than asserting that “engagement does not guarantee success,” our findings surface generative uncertainties, indicating cases where the relationship warrants further investigation. The proposed three-indicator toolkit offers a replicable, course-level method for identifying trends toward an optimal balance of the considered indicators within an institution, guiding evidence-informed pedagogical adjustments and further inquiry.