Session 4: Multimodal Learning Analytics

Speakers

Yoon Lee
TU Delft, The Netherlands

Qi Zhou
University College London, UK

Pankaj Chejara
Tallinn University, Estonia

Start

07/09/2023 - 11:00

End

07/09/2023 - 13:00

Address

Auditorium

Session 4: Multimodal Learning Analytics

Chair: Luis P. Prieto-Santos

11:00-11:30 WEST
Role of Multimodal Systems in Computer-Assisted Learning: A Scoping Review

Yoon Lee, Bibeg Limbu, Zoltan Rusak and Marcus Specht

Abstract: Computer-assisted learning systems, more specifically multimodal learning technologies, use sensors to collect data from multiple modalities to provide personalized learning support beyond traditional learning settings. However, studies of such multimodal learning systems mostly focus on technical aspects of data collection and exploitation, and therefore overlook theoretical and instructional design aspects such as feedback design in multimodal settings. This paper examines multimodal learning systems as computer-assisted learning systems that capture and analyze the learning process and exploit the collected multimodal data to generate feedback in multimodal settings. By investigating a broad range of studies, we aim to reveal the roles of multimodality in computer-assisted learning systems across learning domains. Our scoping review outlines the conceptual landscape of multimodal learning systems, identifies potential gaps, and offers new perspectives on adaptive multimodal system design: intertwining learning data for meaningful insights into learning, designing effective feedback, and implementing both in diverse learning domains.

📄 Read More: https://link.springer.com/chapter/10.1007/978-3-031-42682-7_12
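
The review frames multimodal systems as a sensing-to-feedback loop. As a rough illustration of that idea only (not taken from the paper), the Python sketch below fuses two hypothetical normalized sensor signals into a naive learner-state estimate and maps it to a rule-based feedback message; every name, signal, and threshold here is invented for illustration.

```python
# Hypothetical sketch of a multimodal sensing-to-feedback loop.
# All names, signals, and thresholds are illustrative, not from the paper.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Observation:
    modality: str   # e.g. "gaze", "posture", "audio"
    value: float    # normalized signal in [0, 1]

def estimate_load(window: list[Observation]) -> float:
    """Naive fusion: average the normalized signals in a time window."""
    return mean(o.value for o in window) if window else 0.0

def choose_feedback(load: float) -> str:
    """Rule-based mapping from the fused estimate to a feedback message."""
    if load > 0.8:
        return "Suggest a break or a simpler sub-task."
    if load < 0.3:
        return "Offer a more challenging exercise."
    return "No intervention needed."

window = [Observation("gaze", 0.9), Observation("posture", 0.85)]
print(choose_feedback(estimate_load(window)))  # high estimate -> suggest a break
```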


11:30-12:00 WEST
Automated Detection of Students’ Gaze Interactions in Collaborative Learning Videos: A Novel Approach

Qi Zhou, Amartya Bhattacharya, Wannapon Suraworachet, Hajime Nagahara and Mutlu Cukurova

Abstract: Gaze behaviours have been considered important social signals for exploring human learning. Over the past decade, research has shown positive relationships between certain features of gaze behaviour and the quality of collaborative learning. However, most studies detect students’ gaze behaviours with eye-tracking tools, which are costly, logistically challenging, and can be obtrusive in real-world physical collaboration spaces. This study presents a novel approach to detecting students’ gaze behaviours from videos of real-world collaborative learning activities. Pre-trained computer vision models were used to detect objects in the scene, students’ faces, and their gaze directions. A rule-based approach was then applied to detect gaze behaviours associated with the peer communication and resource management aspects of collaborative learning. To test the accuracy of the proposed approach, twenty collaborative learning sessions, each lasting between 33 and 67 minutes, from five groups in a 10-week higher education course were analysed. The results showed that the proposed approach achieves 66.57% overall accuracy in automatically detecting students’ gaze interactions in collaborative learning videos. The implications of these findings for supporting students’ collaborative learning in real-world technology-enhanced learning environments are discussed.

📄 Read More: https://link.springer.com/chapter/10.1007/978-3-031-42682-7_34
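
To make the rule-based step concrete, the Python sketch below shows one plausible (hypothetical, not the authors’) way to assign a gaze direction, assumed to come from pre-trained face and gaze models, to the nearest target by angular bearing; the 15° tolerance and the target layout are invented for illustration.

```python
# Minimal sketch of a rule-based gaze-target assignment in 2D.
# Head position and gaze angle are assumed outputs of pre-trained CV models;
# the tolerance and target coordinates are illustrative assumptions.
import math

def gaze_target(head_xy, gaze_angle_deg, targets, tolerance_deg=15.0):
    """Return the target whose bearing from the head is closest to the
    gaze direction, if that bearing lies within the angular tolerance."""
    best_name, best_diff = None, tolerance_deg
    for name, (tx, ty) in targets.items():
        bearing = math.degrees(math.atan2(ty - head_xy[1], tx - head_xy[0]))
        diff = abs((bearing - gaze_angle_deg + 180) % 360 - 180)  # wrap angle
        if diff <= best_diff:
            best_name, best_diff = name, diff
    return best_name

# Peers map to "peer communication"; shared materials to "resource management".
targets = {"peer_A": (2.0, 0.0), "laptop": (1.0, -1.0)}
print(gaze_target((0.0, 0.0), -40.0, targets))  # -> "laptop"
```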


12:00-12:30 WEST
Exploring Indicators for Collaboration Quality and Its Dimensions in Classroom Settings Using Multimodal Learning Analytics

Pankaj Chejara, Luis P. Prieto, María Jesús Rodríguez-Triana, Adolfo Ruiz Calleja, Reet Kasepalu, Irene-Angelica Chounta and Bertrand Schneider

Abstract: Multimodal Learning Analytics researchers have explored relationships between collaboration quality and multimodal data. However, state-of-the-art research has scarcely investigated authentic settings and has seldom used video data, which can offer rich behavioral information. In this paper, we present our findings on potential indicators of collaboration quality and its underlying dimensions, such as argumentation and mutual understanding. We collected multimodal data (namely, video and logs) from four Estonian classrooms during authentic computer-supported collaborative learning activities. Our results show that vertical head movement (looking up and down) and mouth-region features could serve as potential indicators of collaboration quality and its aforementioned dimensions. Our clustering results also indicate the potential of video data for identifying different levels of collaboration quality (e.g., high, medium, low). These findings have implications for building collaboration quality monitoring and guiding systems for authentic classroom settings.

📄 Read More: https://link.springer.com/chapter/10.1007/978-3-031-42682-7_5
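
As a generic illustration of the clustering idea in the abstract (not the authors’ actual pipeline), the sketch below clusters synthetic per-window feature vectors, standing in for vertical head movement and mouth-region activity, into three levels with scikit-learn’s KMeans; the feature values are fabricated for the example.

```python
# Illustrative sketch: cluster synthetic video-derived features into
# three collaboration-quality levels. Real features would come from
# face/head analysis of classroom video; these values are made up.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(seed=42)
# Rows: group-level time windows; columns: [vertical_head_movement, mouth_activity]
features = np.vstack([
    rng.normal([0.2, 0.1], 0.05, size=(10, 2)),  # low-activity windows
    rng.normal([0.5, 0.5], 0.05, size=(10, 2)),  # medium-activity windows
    rng.normal([0.8, 0.9], 0.05, size=(10, 2)),  # high-activity windows
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)
print(kmeans.labels_)           # cluster assignment per window
print(kmeans.cluster_centers_)  # centers interpretable as low/medium/high
```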