Interpretable Dropout Prediction: Towards XAI-Based Personalized Intervention
Dropout (neural networks)
Interpretability
Remedial education
DOI:
10.1007/s40593-023-00331-8
Publication Date:
2023-03-14T21:28:56Z
AUTHORS (2)
ABSTRACT
Abstract: Student dropout is one of the most burning issues in STEM higher education, inducing considerable social and economic costs. Using machine learning tools for the early identification of students at risk of dropping out has gained a lot of interest recently. However, there has been little discussion on dropout prediction using interpretable machine learning (IML) and explainable artificial intelligence (XAI) tools. In this work, using data from a large public Hungarian university, we demonstrate how IML and XAI tools can support educational stakeholders in dropout prediction. We show that complex machine learning models – such as the CatBoost classifier – can efficiently identify at-risk students relying solely on pre-enrollment achievement measures; however, they lack interpretability. Applying IML tools, such as permutation importance (PI), partial dependence plots (PDP), LIME, and SHAP values, the model's predictions can be explained both globally and locally. Explaining individual predictions opens up great opportunities for personalized intervention, for example by offering the right remedial courses or tutoring sessions. Finally, we present the results of a user study that evaluates whether educational stakeholders find these tools useful.
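As a minimal sketch of the workflow the abstract describes (a classifier trained on pre-enrollment achievement measures, explained globally with permutation importance), here is a toy example on fully synthetic data. The feature names (`math_score`, `admission_points`, `language_exam`) and the data-generating process are invented for illustration, and a scikit-learn gradient-boosting classifier stands in for the paper's CatBoost model.

```python
# Hedged sketch: permutation importance (PI), one of the global XAI tools
# named in the abstract, on synthetic "pre-enrollment" data.
# All feature names and distributions below are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Synthetic pre-enrollment achievement measures (hypothetical).
math_score = rng.normal(60, 15, n)
admission_points = rng.normal(400, 60, n)
language_exam = rng.integers(0, 2, n)  # passed an advanced language exam

# In this toy setup, dropout risk is driven mainly by the math score.
logit = (-0.08 * (math_score - 60)
         - 0.01 * (admission_points - 400)
         - 0.5 * language_exam)
dropout = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([math_score, admission_points, language_exam])
X_train, X_test, y_train, y_test = train_test_split(
    X, dropout, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Global explanation: how much does shuffling each feature hurt accuracy?
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0)
feature_names = ["math_score", "admission_points", "language_exam"]
for name, imp in zip(feature_names, result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

In this synthetic setup the math score dominates the importance ranking, mirroring how PI is used in the paper to identify which pre-enrollment measures drive the model's dropout predictions. Local tools such as LIME or SHAP would then explain individual students' risk scores in a similar per-feature fashion.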