Paper Session 5: Assessment and Evaluation

EC-TEL 2021 Bozen/Bolzano

Speakers

Rialy Andriamiseza
Université de Toulouse, France
Fahima Djelil
IMT Atlantique, Lab-STICC, France
Thomas Sergent
CNRS, Sorbonne Université, France
Lalilo, France

Start

September 23, 2021 - 11:00

End

September 23, 2021 - 12:30

Address

Zoom Room 1 @ EC-TEL Gather Town


Paper Session 5: Assessment and Evaluation

Chair: Tinne De Laet

11:00-11:30 CET
Recommendations for Orchestration of Formative Assessment Sequences: a Data-driven Approach

★ Best paper candidate

Rialy Andriamiseza[1], Franck Silvestre[1], Jean-François Parmentier[2] and Julien Broisin[1]
[1] Université de Toulouse, France
[2] Toulouse INP, France

Abstract: Formative assessment aims to improve teaching and learning by providing teachers and students with feedback designed to help them adapt their behavior. To cope with the increasing number of students in higher education and support this kind of activity, technology-enhanced formative assessment tools have emerged. These tools generate data that can serve as a basis for improving the processes and services they provide. Drawing on the literature and on a dataset gathered from the use of a formative assessment tool in higher education, whose process, inspired by Mazur’s Peer Instruction, asks learners to answer a question before and after a confrontation with peers, we use learning analytics to provide evidence-based knowledge about formative assessment practices. Our results suggest that: (1) the benefits of formative assessment sequences increase when the proportion of correct answers is close to 50% during the first vote; (2) the benefits of formative assessment sequences increase when the rationales of correct learners are rated more highly than those of incorrect learners; (3) peer ratings are consistent when correct learners are more confident than incorrect ones; (4) self-rating is inconsistent in a peer rating context; (5) the number of peer ratings makes no significant difference in terms of sequence benefits. Based on these results, recommendations for formative assessment are discussed and a data-informed formative assessment process is inferred.

📄 Read More: https://link.springer.com/chapter/10.1007/978-3-030-86436-1_19


11:30-12:00 CET
Analysing Peer Assessment Interactions and Their Temporal Dynamics Using a Graphlet-Based Method

Fahima Djelil[1], Laurent Brisson[1], Raphaël Charbey[2], Cecile Bothorel[1], Jean-Marie Gilliot[1] and Philippe Ruffieux[3]
[1] IMT Atlantique, France
[2] Energiency, France
[3] HEP Vaud, Switzerland

Abstract: Engaging students in peer assessment is an innovative assessment process that has a positive impact on students’ learning experience. However, the adoption of peer assessment can be slow and uncomfortable for students, and the process is prone to several biases. In this paper, we argue that the analysis of peer assessment interactions and phenomena can benefit from the social network analysis domain. We applied a graphlet-based method to a dataset collected during in-class courses that integrated a peer assessment platform. This allowed us to interpret the network structures shaping peer assessment interactions and to describe the resulting peer assessment roles and their temporal dynamics. Results showed that students develop a positive tendency towards adopting the peer assessment process and gradually engage with well-balanced roles, even though they initially choose mostly to be assessed by teachers and by peers they know. This study contributes research insights into peer assessment learning analytics and motivates future work to scaffold peer learning in similar contexts.

📄 Read More: https://link.springer.com/chapter/10.1007/978-3-030-86436-1_7


12:00-12:30 CET
Using Prompts and Remediation to Improve Primary School Students’ Self-Evaluation and Self-Efficacy in a Literacy Web Application

★ Best paper candidate

Thomas Sergent[1,2], Morgane Daniel[1], François Bouchet[2] and Thibault Carron[1]
[1] Sorbonne Université, France
[2] Lalilo, France

Abstract: Self-regulation skills are critical for students of all ages in order to maximize their learning. A key aspect of self-regulation is being aware of one’s performance and of one’s deficits in self-evaluation. Additionally, no clear consensus has been reached regarding the age at which one can start learning these self-regulation processes. To investigate the possibility of raising awareness of some self-regulation deficits in 5- to 8-year-old children, we introduced into a literacy web application for primary school students two prompts, triggered randomly after 1 out of 15 exercises, to evaluate perceived difficulty [too easy, good, too difficult] and desired difficulty [easier, same level, harder]. Comparing students’ actual performance with their responses to the self-regulatory prompts can provide information about their ability to self-regulate their learning, in particular in terms of self-evaluation and self-efficacy. We collected 2,600,142 responses from 467,116 students for our experiments. The goal of this paper is to assess the impact of two different remediation strategies designed to reduce the two types of deficits initially measured in students. In a first study, we measured the impact of a gauge (resp. an audio recording) showing (resp. telling) the number of correct and incorrect answers to help students evaluate their actual performance while answering the self-regulation prompts. In a second study, we measured the impact of giving self-evaluation and self-efficacy remediation to students whose answers to the self-regulation prompts showed a deficit in self-regulated learning abilities.
The results show (a) a significant reduction of self-evaluation deficits when answers were supported by a visual gauge, (b) no significant impact on self-evaluation deficits when answers were supported by an audio recording, and (c) a significant reduction of future self-evaluation deficits when students were given audio feedback advising them not to repeat a detected deficit. This underlines the possibility of scaffolding self-regulated learning skills in a web-based application from a young age while learning another skill.

📄 Read More: https://link.springer.com/chapter/10.1007/978-3-030-86436-1_17