Speakers
James Goh
AILYZE
AI Qualitative Analysis in TEL: Peer-Reviewed Workflows with 90%+ Agreement
📅 Friday 22/05 16:00-19:00h
📍 Workshop Space B
🔎 Needs Analysis
PhD projects in Technology-Enhanced Learning often rely on interviews, open-ended surveys, classroom discourse, or policy documents, but analysing such data is time-consuming, difficult to scale, and hard to keep transparent. Generative AI looks promising, yet many doctoral researchers are unsure how to use it without compromising rigour, interpretive depth, or ethical standards.
This workshop meets a practical TEL need: how to use AI for qualitative analysis in a way that reviewers can trust. Participants will learn peer-reviewed workflows where AI-assisted coding has shown strong alignment with human qualitative judgements (90%+ in benchmarked settings) and how to reproduce those results with a clear “researcher-in-the-loop” audit trail. The focus is on defensible qualitative reporting for TEL publications and design cycles.
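As context for the agreement figures above: such results are usually reported as percent agreement or Cohen's kappa between AI-suggested codes and a human coder's codes on the same segments. The sketch below (Python, not the workshop's own tooling, with hypothetical example codes) shows how such a check can be computed.

```python
# Minimal sketch (not the workshop's tooling): agreement between AI-suggested
# codes and a human coder's codes for the same text segments.
from collections import Counter

def percent_agreement(human: list[str], ai: list[str]) -> float:
    """Share of segments where both coders assigned the same code."""
    return sum(h == a for h, a in zip(human, ai)) / len(human)

def cohens_kappa(human: list[str], ai: list[str]) -> float:
    """Chance-corrected agreement between the two coders."""
    n = len(human)
    p_obs = percent_agreement(human, ai)
    h_counts, a_counts = Counter(human), Counter(ai)
    p_chance = sum(h_counts[c] / n * a_counts[c] / n for c in set(human) | set(ai))
    return (p_obs - p_chance) / (1 - p_chance)

# Hypothetical codes for six interview segments:
human_codes = ["workload", "feedback", "workload", "motivation", "feedback", "workload"]
ai_codes    = ["workload", "feedback", "workload", "feedback",   "feedback", "workload"]
print(f"agreement: {percent_agreement(human_codes, ai_codes):.0%}")  # 83%
print(f"kappa:     {cohens_kappa(human_codes, ai_codes):.2f}")       # 0.71
```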
📒 Session Description
This is a hands-on methodology workshop where you’ll learn peer-reviewed AI workflows that have shown over 90% validated agreement with human coding, and immediately apply them to real TEL-style data.
Flow (90 minutes):
0–10 min: Why AI fails in qualitative research (and how to prevent it): rigour, bias, hallucinations, traceability
10–25 min: Live demo: raw text → coding → themes → segment comparison → evidence-linked reporting
25–70 min: Guided practice (solo + pairs): import data, run inductive/deductive coding, refine themes, verify quotes, compare segments
70–90 min: “Reviewer-ready” wrap-up: transparency checklist + draft your AI-methods paragraph
Participants leave with a reusable workflow, coded outputs, and a publication-friendly reporting structure.
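For the “evidence-linked reporting” step and the coded outputs mentioned above, a minimal sketch of what such a record can look like is shown below (Python, with hypothetical field names; the workshop’s tool will have its own export format).

```python
# Minimal sketch of an evidence-linked record, with hypothetical field names:
# each coded theme keeps its supporting quotes and their exact source
# locations so every claim in the report can be traced back to the data.
from dataclasses import dataclass, field

@dataclass
class Evidence:
    quote: str          # verbatim excerpt from the transcript
    source: str         # e.g. file or participant ID
    location: str       # e.g. line range or timestamp

@dataclass
class Theme:
    name: str
    definition: str
    evidence: list[Evidence] = field(default_factory=list)

theme = Theme(
    name="Assessment workload",
    definition="Learners describe assessment load as crowding out deeper engagement.",
    evidence=[
        Evidence(
            quote="By week eight I was just submitting things, not learning from them.",
            source="interview_P07.txt",   # hypothetical file
            location="lines 112-118",
        ),
    ],
)
```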
💡 Learning Objectives
By the end of the workshop, you will be able to:
- Run AI-assisted thematic coding using inductive (discovery) or deductive (codebook) approaches
- Keep full interpretive control using structured human review points (what you must decide vs. what AI can draft)
- Validate coding quality using agreement checks, ambiguity flags, and stability tests
- Generate evidence-linked outputs (themes → supporting quotes → source locations) for auditability
- Compare themes across segments (e.g., role, course type, institution, learner group) without losing context
- Draft a reviewer-ready Methods paragraph describing AI use transparently and ethically
- Build a dataset chatbot that answers questions while always grounding claims in quotations
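For the final objective, one way to keep a dataset chatbot grounded in quotations is to require verbatim quotes in every answer and verify them against the source excerpts. The sketch below is a minimal illustration under that assumption; `ask_model` is a hypothetical stand-in for whatever LLM client is used, not a specific library API.

```python
import re

def build_grounded_prompt(question: str, excerpts: list[str]) -> str:
    """Prompt that restricts the model to the excerpts and demands quotations."""
    numbered = "\n".join(f"[{i + 1}] {text}" for i, text in enumerate(excerpts))
    return (
        "Answer the question using ONLY the excerpts below. Support every "
        "claim with a verbatim quotation in double quotes, followed by its "
        "excerpt number. If the excerpts do not answer the question, say so.\n\n"
        f"Excerpts:\n{numbered}\n\nQuestion: {question}"
    )

def quotes_are_verbatim(answer: str, excerpts: list[str]) -> bool:
    """Reject answers whose quoted spans do not appear in any source excerpt."""
    quoted = re.findall(r'"([^"]+)"', answer)
    return all(any(q in e for e in excerpts) for q in quoted)

# Usage sketch (ask_model is assumed, e.g. a wrapper around your LLM client):
# answer = ask_model(build_grounded_prompt(question, retrieved_excerpts))
# if not quotes_are_verbatim(answer, retrieved_excerpts):
#     answer = "No verifiable quotation found; flag this answer for manual review."
```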
