Session 7: Responsible LA & AI
Chair: Rebecca Ferguson
A critical consideration of the ethical implications in learning analytics as data ecology
Paul Prinsloo, Mohammad Khalil and Sharon Slade
Abstract: Over the past decade or so, learning analytics (LA) has matured as a research field and as operational practice within many educational institutions, mostly in the Global North. Learning analytics is commonly defined as the measurement, collection, analysis and use of students’ data to improve students’ learning. Until recently, the main sources of data for LA were restricted to institutional datasets gathered from, for example, learning management systems (LMSs) and registration systems. Since such data gathering took place within a relatively closed digital ecosystem, institutions held the responsibility to maintain student privacy and to restrict their data collection to that needed to carry out their educational duties. The increasing digitisation and datafication of higher education, combined with the increased commercialisation of teaching and learning support systems and applications, destabilise this understanding of learning analytics as a digital ecosystem. Given these continuing changes, agreements with platform providers and the roles of social media, applications, plugins, and mobile learning in teaching and learning now prompt us to consider learning analytics as a data ecology rather than as a ‘closed’ ecosystem. This paper first maps learning analytics as a data ecology before illustrating the need to think differently about its ethical implications.
📄 Read More: https://link.springer.com/chapter/10.1007/978-3-031-42682-7_25
AI and Narrative Scripts to Educate Adolescents About Social Media Algorithms: Insights About AI Overdependence, Trust and Awareness
Emily Theophilou, Francesco Lomonaco, Gregor Donabauer, Dimitri Ognibene, J. Roberto Sánchez Reina and Davinia Hernandez-Leo
Abstract: Social Media (SM) Artificial Intelligence (AI) algorithms provide users with engaging and personalized content. Yet, the personalization of algorithms may have a negative impact on users who lack AI literacy. The limited understanding of SM algorithms among the population suggests that adolescents are more likely to place blind trust in the information they consume, exposing them to negative consequences (misinformation, filter bubbles and echo chambers). We therefore proposed an intervention with a blinded for review approach to raise awareness of AI algorithms in SM. To foster an authentic learning experience and to question adolescents’ trust in AI, we utilized a low-accuracy AI image classifier. A quasi-experimental study was conducted among 144 high-school students in Barcelona, Spain. The results show that the intervention improved students’ knowledge of SM algorithms and shaped more critical attitudes towards them. A comparison of students’ choices between human suggestions and those produced by a low-accuracy AI classifier shows a lack of AI overdependence. Information about the suggestions’ source did not affect students’ trust in, or learning about, AI. These findings contribute to SM algorithms education and share insight into the effect of deploying low-accuracy detectors in learning technology interventions.
📄 Read More: https://link.springer.com/chapter/10.1007/978-3-031-42682-7_28