Speakers
Stanislav Pozdniakov
Monash University, Australia
Esteban Villalobos
Université Toulouse III, France
Aditya Joshi
Utrecht University, The Netherlands
Mohamed Mouaici
Logipro, France
Start
06/09/2023 - 10:00
End
06/09/2023 - 13:00
Address
Room 40.2.15
Session 3: Student Support with Learning Analytics
Chair: Oleksandra Poquet
Single or Multi-page Learning Analytics Dashboards? Relationships between Teachers’ Cognitive Load and Visualisation Literacy
Stanislav Pozdniakov, Roberto Martinez-Maldonado, Yi-Shan Tsai, Namrata Srivastava, Yuchen Liu and Dragan Gasevic
Abstract: There has been a proliferation of learning analytics (LA) interfaces designed to support teachers, such as LA dashboards. However, although teacher dashboards have been extensively studied, there is limited understanding of the relationship between single-page or multi-page dashboard designs and the cognitive demands placed on teachers to comprehend them. Additionally, teachers typically possess varying levels of visualisation literacy (VL), which may make it easier or more difficult for them to engage with single-page versus multi-page dashboard designs. In this paper, we explore how teachers with varying VL use single-page and multi-page LA dashboards. We conducted a quasi-experimental study with 23 higher education teachers of varied VL inspecting single-page and multi-page LA dashboards. We used an eye-tracking device to measure cognitive load while teachers inspected LA dashboards about online group work. We investigated how proxy metrics of teachers’ cognitive load derived from eye-tracking data varied depending on the type of dashboard teachers used and their level of VL. Our findings suggest that the design of the LA dashboard had an impact on the cognitive load experienced by the teachers. Post-hoc analysis revealed that teachers with low VL had marginally lower cognitive load when using single-page dashboards. We argue that LA dashboard design for teachers should account for teachers’ levels of VL, and we provide recommendations for design.
📄 Read More: https://link.springer.com/chapter/10.1007/978-3-031-42682-7_23
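The analysis behind these findings is, in spirit, a comparison of eye-tracking-based cognitive-load proxies across dashboard types within VL groups. The snippet below is a minimal, hypothetical sketch of such a comparison, not the authors’ pipeline; the column names, the pupil-diameter proxy, and the choice of a Mann-Whitney U test are all assumptions made for illustration.

```python
import pandas as pd
from scipy.stats import mannwhitneyu

# Hypothetical per-teacher eye-tracking proxies (made-up values and columns);
# the actual study uses its own metrics and statistical procedure.
df = pd.DataFrame({
    "teacher":        [1, 1, 2, 2, 3, 3, 4, 4],
    "dashboard":      ["single", "multi"] * 4,                    # within-subjects conditions
    "vl_level":       ["low", "low", "low", "low",
                       "high", "high", "high", "high"],           # visualisation literacy group
    "pupil_diameter": [3.1, 3.6, 3.0, 3.5, 3.4, 3.3, 3.2, 3.4],   # cognitive-load proxy (mm)
})

# Descriptive summary per VL group and dashboard type.
print(df.groupby(["vl_level", "dashboard"])["pupil_diameter"].agg(["mean", "std"]))

# Post-hoc style comparison within the low-VL group (illustrative test choice).
low = df[df["vl_level"] == "low"]
stat, p = mannwhitneyu(
    low.loc[low["dashboard"] == "single", "pupil_diameter"],
    low.loc[low["dashboard"] == "multi", "pupil_diameter"],
)
print(f"Low-VL, single vs multi-page: U={stat:.1f}, p={p:.3f}")
```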
Analyzing Learners’ Perception of Indicators in Student-Facing Analytics: A Card Sorting Approach
Esteban Villalobos, Isabel Hilliger, Mar Perez-Sanagustin, Carlos Gonzalez, Sergio Celis and Julien Broisin
Abstract: In recent years, many studies have explored using different indicators to support students’ self-monitoring. This has motivated the development of various student-facing analytics, such as dashboards and chatbots. However, there is a limited understanding of how learners interpret these indicators and act on that information. In this study, we evaluate different indicators from a student perspective by adapting the card sorting technique, which is employed mainly in Human-Centered Design. We chose eight indicators based on different comparative reference frames from the literature to create 16 cards that we used to present both a visual and a text representation per indicator. Qualitative and quantitative data were collected from 21 students of three majors at two Latin American universities. According to the quantitative results, the level of agreement across students about the understandability and actionability of the indicators was relatively low. Nonetheless, the indicators that included temporality were found to be less interpretable but more actionable than those that did not. The qualitative analysis indicates that several students would use this information to improve their study habits only if their performance in the course is lower than expected. These findings might be used as a starting point to design student-facing analytics. Also, the adaptation of the card sorting technique could be replicated to understand learners’ use of indicators in other TEL contexts.
📄 Read More: https://link.springer.com/chapter/10.1007/978-3-031-42682-7_29
Student Perception of Social Comparison in Online Learning – An Exploratory Study
Aditya Joshi, Bente Molenkamp and Sergey Sosnovsky
Abstract: People often compare themselves with their peers in various contexts, including education. Utilising this general tendency can be an effective strategy to enhance students’ motivation. Several successful implementations of social comparison in educational applications have demonstrated positive outcomes. Yet, they largely disregard the fact that people differ in how they react to social comparison and how they process and interact with social information. In this study, we built and evaluated a set of interface prototypes visualising different types of social comparison based on the direction and distance of comparison, as well as privacy policy and social space. We found that not all students have similar social comparison preferences. On some questions, students in a low-performance scenario provided significantly different answers compared to students at the top of the class. We also compared students with different Achievement Goal Orientations and discovered that this orientation can also be a significant factor in several situations. These results provide insights into how the effect of social comparison can differ based on individual differences among students. These insights can potentially lead to adaptive social comparison interfaces that support and motivate students in a more individually optimised manner.
📄 Read More: https://link.springer.com/chapter/10.1007/978-3-031-42682-7_9
Early Prediction of Learners At-risk of Failure in Online Professional Training Using a Weighted Vote
Mohamed Mouaici
Abstract: Professional training involves the acquisition of knowledge, skills, and expertise required to perform specific job roles. It can take various forms, including classroom instruction, practical experience, and online training. While online training has the potential to reach a wider audience, a lack of interaction and support may lead to lower engagement and completion rates. In this paper, we propose a solution to predict, as early as possible, the risk of failure in online professional training. To achieve this, we use a dataset consisting of 13,719 observations covering 912 learners and 182 online courses taught in France via an LMS from 2017 to 2022. Our data analysis led us to define three risk-of-failure categories: low risk, moderate risk, and high risk. The objective is to predict the category of each observation when the learner reaches the halfway point of each course. Initially, we tested nine predictive models, which revealed discrepancies in their results across the three categories. To address these discrepancies, we propose a new solution that employs a weighted vote to improve the classifications. This solution applies the Borda principle to rank the nine models based on their predictive performance in each category. Then, the weight assigned to each model is calculated by considering both its rank and its F1 score in each risk-of-failure category. Finally, we apply a weighted vote involving the nine models to improve the classifications. On average, our solution improves the results by 1.2% across the three categories, as measured by the F1 score.
📄 Read More: https://link.springer.com/chapter/10.1007/978-3-031-42682-7_17
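To make the weighted-vote idea above concrete, here is a minimal sketch in Python. It is not the paper’s implementation: the exact combination of Borda rank and F1 score (here simply rank multiplied by F1), and all function and variable names, are assumptions for illustration only.

```python
import numpy as np

CATEGORIES = ["low", "moderate", "high"]  # risk-of-failure categories

def borda_weights(f1_scores: np.ndarray) -> np.ndarray:
    """Given an (n_models, n_categories) matrix of per-category F1 scores,
    return per-model, per-category weights combining Borda rank and F1."""
    n_models, n_categories = f1_scores.shape
    weights = np.zeros_like(f1_scores)
    for c in range(n_categories):
        # Borda principle: the best model in this category gets the highest
        # rank score (n_models), the worst gets 1.
        order = np.argsort(f1_scores[:, c])            # worst to best
        ranks = np.empty(n_models)
        ranks[order] = np.arange(1, n_models + 1)
        # Assumed combination of rank and F1 (one of several possible choices).
        weights[:, c] = ranks * f1_scores[:, c]
    return weights

def weighted_vote(predictions: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Combine hard predictions from n_models classifiers (shape
    (n_models, n_samples), values are category indices) into one label per sample."""
    n_models, n_samples = predictions.shape
    scores = np.zeros((n_samples, weights.shape[1]))
    for m in range(n_models):
        for s in range(n_samples):
            c = predictions[m, s]
            # Each model adds its category-specific weight to the class it predicts.
            scores[s, c] += weights[m, c]
    return scores.argmax(axis=1)

# Toy usage: 9 models, 5 observations, 3 risk categories.
rng = np.random.default_rng(0)
f1 = rng.uniform(0.5, 0.9, size=(9, 3))      # validation F1 per model and category
preds = rng.integers(0, 3, size=(9, 5))      # each model's predicted category per observation
print([CATEGORIES[i] for i in weighted_vote(preds, borda_weights(f1))])
```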