Session 6: Intelligent Tutoring Systems



Speakers:
Zhaoxing Li, Durham University, UK
Behzad Mirzababaei, Know-Center GmbH, Graz, Austria
Robin Schmucker, Carnegie Mellon University, USA
Conrad Borchers, Carnegie Mellon University, USA


07/09/2023, 11:00-13:00
Room 40.2.15



Chair: Zach Pardos

11:00-11:30 WEST
Broader and Deeper: A Multi-Features with Latent Relations BERT Knowledge Tracing Model

Zhaoxing Li, Mark Jacobsen, Lei Shi, Yunzhan Zhou and Jindi Wang

Abstract: Knowledge tracing aims to estimate students’ knowledge state or skill mastery level over time, which is evolving into an essential task in educational technology. Traditional knowledge tracing algorithms generally use one or a few features to predict students’ behaviour and do not consider the latent relations between these features, which can be limiting, as it disregards important information contained in the features. In this paper, we propose MLFBK, a Multi-Features with Latent Relations BERT Knowledge Tracing model: a novel BERT-based knowledge tracing approach that utilises multiple features and mines latent relations between features to improve the performance of the KT model. Specifically, our algorithm leverages four data features (student_id, skill_id, item_id, and response_id), as well as three meaningful latent relations among features: individual skill mastery, the ability profile of students (learning transfer across skills), and problem difficulty. By incorporating these explicit features, latent relations, and the strength of the BERT model, we achieve higher accuracy and efficiency in knowledge tracing tasks. We use t-SNE as a visualisation tool to analyse different embedding strategies. Moreover, we conduct ablation studies and evaluate different activation functions. Experimental results demonstrate that our algorithm outperforms baseline methods and offers good interpretability.
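The abstract describes feeding several interaction features into a BERT-style encoder. A minimal sketch of the input side, assuming (as in BERT's own input embeddings) that per-feature embeddings are summed into one vector per interaction; the vocabulary sizes, dimension, and summation strategy here are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical vocabulary sizes and embedding dimension (not from the paper).
N_STUDENTS, N_SKILLS, N_ITEMS, N_RESPONSES, DIM = 100, 20, 50, 2, 16

# One embedding table per feature, mirroring the four data features named
# in the abstract: student_id, skill_id, item_id, response_id.
tables = {
    "student_id":  rng.normal(size=(N_STUDENTS, DIM)),
    "skill_id":    rng.normal(size=(N_SKILLS, DIM)),
    "item_id":     rng.normal(size=(N_ITEMS, DIM)),
    "response_id": rng.normal(size=(N_RESPONSES, DIM)),
}

def embed_interaction(student, skill, item, response):
    """Sum the per-feature embeddings into one interaction vector,
    analogous to summing token/segment/position embeddings in BERT."""
    ids = {"student_id": student, "skill_id": skill,
           "item_id": item, "response_id": response}
    return sum(tables[name][idx] for name, idx in ids.items())

# A student's interaction history becomes a (seq_len, DIM) matrix
# that a transformer encoder would consume.
seq = np.stack([embed_interaction(3, skill, item, resp)
                for skill, item, resp in [(1, 10, 1), (2, 11, 0), (1, 12, 1)]])
```

Summing (rather than concatenating) keeps the encoder's input width fixed regardless of how many features are added, which is one common reason BERT-style models combine embeddings this way.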


11:30-12:00 WEST
Interactive web-based learning materials vs. tutorial chatbot: Differences in user experiences

Behzad Mirzababaei, Katharina Maitz, Angela Fessl and Viktoria Pammer-Schindler

Abstract: Today’s learning platforms make content available and enable social interaction between humans. Tomorrow, such platforms could also host computational tutors that support learning through dialogue. In this work, we explore how user experience differs between learning via web-based interactive content and learning within a dialogue led by a computational tutorial agent. To this end, we conducted a study with 31 master’s students of inclusive education. One group interacted with web-based textual learning materials (DIGIVIDget condition, n=14). The other group interacted with a tutorial agent (DIGIBOT condition, n=17). Both groups received the same text-based content on formulating search queries for the Internet. Using the standard System Usability Scale, the DIGIBOT’s usability and usefulness were rated slightly higher than the DIGIVIDget’s, but not significantly so (Mann-Whitney U test, U=166.5, p=.06). Subsequently, two focus group discussions were carried out with participants who had also tested the respective other technology (n=12).
Focus group participants positively highlighted the interactive nature of DIGIBOT, the motivational effect of receiving immediate feedback, and that DIGIBOT requires and facilitates more concentration than DIGIVIDget. Freely navigable web-based content may be more suitable for giving an overview of a large area of knowledge and allowing punctual access to it. Tutorial, dialogic interaction is therefore particularly interesting when prior knowledge of the learning domain is limited, or for users with special needs, who especially benefit from good structure, interactivity that supports focus, and dialogic interaction as a further motivational factor.
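The usability comparison above rests on the Mann-Whitney U statistic (in practice one would use `scipy.stats.mannwhitneyu`, which also returns the p-value). As a minimal pure-Python sketch of how the U statistic itself is computed via rank sums, with ties receiving average ranks; the p-value computation is omitted:

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U statistic for two independent samples.

    Pools both samples, assigns ranks (average rank for ties),
    and returns U = min(U_a, U_b). No p-value is computed here.
    """
    # Pool the samples, tagging each value with its group (0 = a, 1 = b).
    combined = sorted((v, g) for g, xs in ((0, a), (1, b)) for v in xs)
    n = len(combined)

    # Assign 1-based ranks; tied values share the average of their ranks.
    rank_of = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j < n and combined[j][0] == combined[i][0]:
            j += 1
        avg_rank = (i + 1 + j) / 2  # average of ranks i+1 .. j
        for k in range(i, j):
            rank_of[k] = avg_rank
        i = j

    # Rank sum of group a, then the standard U formula.
    r_a = sum(rank_of[k] for k in range(n) if combined[k][1] == 0)
    n_a, n_b = len(a), len(b)
    u_a = r_a - n_a * (n_a + 1) / 2
    return min(u_a, n_a * n_b - u_a)
```

For fully separated samples such as [1, 2, 3] vs. [4, 5, 6] the statistic is 0, the smallest possible value; larger U values indicate more overlap between the two groups' ratings.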


12:00-12:30 WEST
Learning to give useful hints: Assistance action evaluation and policy improvements

Robin Schmucker, Nimish Pachapurkar, Shanmuga Bala, Miral Shah and Tom Mitchell

Abstract: We describe a fielded online tutoring system that learns which of several candidate assistance actions (e.g., one of multiple hints) to provide to students when they answer a practice question incorrectly, but before they make a second attempt. The system learns, from large-scale data of prior students, which assistance action to give for each of thousands of questions, to maximize measures of student learning outcomes. Using data from over 190,000 students in an online Biology course collected over a four-month period, we quantify the impact of different assistance actions for each question on a variety of outcomes (e.g., response correctness, practice completion), framing the learning task as a multi-armed bandit problem. We study the relationships among different measures of learning outcomes, leading us to design an algorithm for training an assistance policy that optimizes the student’s success at their second attempt answering the current question, as well as their overall performance for the current practice session. We evaluate the trained policy for providing assistance actions, comparing it to a randomized assistance policy in live use with over 20,000 students, showing significant improvements resulting from the system’s ability to learn to teach better based on observed data from earlier students.
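The abstract frames hint selection as a multi-armed bandit problem. A minimal sketch of that framing, assuming a simple epsilon-greedy policy per question and second-attempt correctness as the reward; the paper's actual policy, training procedure, and reward design differ, and the class and parameter names below are hypothetical:

```python
import random

class HintBandit:
    """Epsilon-greedy bandit over candidate assistance actions for one question.

    Illustrative only: tracks a running mean reward per hint and mostly
    picks the best-performing hint, exploring at rate `epsilon`.
    """

    def __init__(self, n_actions, epsilon=0.1, seed=0):
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        self.counts = [0] * n_actions        # times each hint was shown
        self.values = [0.0] * n_actions      # mean observed reward per hint

    def choose(self):
        """Explore with probability epsilon, otherwise exploit the best hint."""
        if self.rng.random() < self.epsilon:
            return self.rng.randrange(len(self.counts))
        return max(range(len(self.values)), key=self.values.__getitem__)

    def update(self, action, reward):
        """Incrementally update the mean reward for the shown hint.

        The reward could be, e.g., 1.0 if the student's second attempt
        was correct and 0.0 otherwise.
        """
        self.counts[action] += 1
        self.values[action] += (reward - self.values[action]) / self.counts[action]
```

One such bandit per question (thousands of them, in the paper's setting) lets the system converge on the hint that best helps students with that particular question, while the logged randomized policy provides the comparison data.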


12:30-13:00 WEST
What makes problem-solving practice effective? Comparing paper and AI tutoring

Conrad Borchers, Paulo F. Carvalho, Meng Xia, Pinyang Liu, Kenneth R. Koedinger and Vincent Aleven

Abstract: In numerous studies, intelligent tutoring systems (ITSs) have proven effective in helping students learn mathematics. Past theory posits that their effectiveness derives from efficiently providing eventually-correct practice opportunities. Yet, there is little empirical evidence on how they compare to other forms of instruction in this regard. The current study investigates these mechanisms by comparing problem solving with an ITS versus solving the same problems on paper. We analyze learning-process and pre-post gain data from N = 97 middle school students practicing linear graphs. We find that (i) students working with the ITS had more than twice as many eventually-correct practice opportunities as those working on paper in the same amount of time, and (ii) students skipped more steps on paper in the unit with the largest tutor advantage. These findings are consistent with tutoring allowing students to grapple with challenging steps through tutor assistance. Yet, contrary to our hypothesis, students could only partially convert these practice advantages into learning gain advantages. We discuss how this finding could be explained by increased gaming behaviour invoked by the tutor’s menus, which, along with step skipping on paper, was significantly negatively associated with learning gains. This study provides first-of-its-kind quantitative evidence on when and how the scaffolding afforded by ITSs yields more learning opportunities and faster learning compared to equivalent paper-and-pencil practice.
