Investigating algorithmic bias in student progress monitoring

Keywords: Learning Analytics, Grading (engineering), Odds
DOI: 10.1016/j.caeai.2024.100267 Publication Date: 2024-07-18T08:46:42Z
ABSTRACT
This research investigates bias in AI algorithms used for monitoring student progress, focusing specifically on attributes related to age, disability, and gender. The study is motivated by incidents such as the UK A-level grading controversy, which demonstrated the real-world implications of biased algorithms. Using the Open University Learning Analytics Dataset, the study evaluates fairness with metrics such as ABROCA, Average Odds Difference, and Equal Opportunity Difference. The analysis is structured into three experiments. The first experiment examines the attribute data sources and reveals that institutional data is the primary contributor to model discrimination, followed by Virtual Learning Environment data, while assessment data is the least biased. The second experiment introduces an Optimal Time Index, which pinpoints Day 60 of an average 255-day course as the optimal time for predicting outcomes, balancing timely interventions, accuracy, and efficient resource allocation. The third experiment implements mitigation strategies throughout the model's life cycle, reducing bias without compromising accuracy. Finally, this research presents a Student Progress Card, designed to provide actionable, personalized feedback to each student.
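Two of the fairness metrics named in the abstract can be computed directly from group-wise confusion-matrix rates: Equal Opportunity Difference is the gap in true-positive rates between cohorts, and Average Odds Difference averages the gaps in true-positive and false-positive rates. A minimal sketch follows; the function names and the toy data are illustrative, not taken from the study, and ABROCA is omitted because it additionally requires each group's full ROC curve rather than a single prediction threshold.

```python
def rates(y_true, y_pred):
    """Return (TPR, FPR) for binary labels and binary predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp / (tp + fn), fp / (fp + tn)

def fairness_metrics(y_true, y_pred, group):
    """Equal Opportunity Difference (EOD) and Average Odds Difference (AOD)
    between an unprivileged (group == 0) and a privileged (group == 1) cohort.
    Values near 0 indicate parity; the sign shows which cohort is favored."""
    g0 = [(t, p) for t, p, g in zip(y_true, y_pred, group) if g == 0]
    g1 = [(t, p) for t, p, g in zip(y_true, y_pred, group) if g == 1]
    tpr0, fpr0 = rates([t for t, _ in g0], [p for _, p in g0])
    tpr1, fpr1 = rates([t for t, _ in g1], [p for _, p in g1])
    eod = tpr0 - tpr1                                # TPR gap only
    aod = 0.5 * ((fpr0 - fpr1) + (tpr0 - tpr1))      # mean of FPR and TPR gaps
    return eod, aod

# Toy example: the unprivileged group gets half its at-risk students
# flagged correctly, the privileged group gets all of them.
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 0, 1, 0, 1, 1, 0, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
eod, aod = fairness_metrics(y_true, y_pred, group)  # eod = -0.5, aod = 0.0
```

Note that the two metrics can disagree, as in the toy data above: the TPR gap (EOD) is large, but it is cancelled by an opposite FPR gap, so AOD alone would miss the disparity. This is one reason studies such as this report several fairness metrics side by side.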