Unraveling students' interaction around a tangible interface using multimodal learning analytics

Keywords: machine learning, education, social sciences, collaborative learning, Kinect sensor, tangible user interface, information retrieval techniques
DOI: 10.5281/zenodo.3554730 Publication Date: 2015-10-18
ABSTRACT
In this paper, we describe multimodal learning analytics (MMLA) techniques to analyze data collected around an interactive learning environment. In a previous study (Schneider and Blikstein, 2015), we designed and evaluated a Tangible User Interface (TUI) where dyads (i.e., pairs) of students were asked to learn about the human auditory system by reconstructing it. In the current study, we present the analysis of the data collected in the form of logs, both from students' interaction with the tangible interface and from their gestures, and we describe how we extracted meaningful predictors of student learning from these two datasets. First, we show how information retrieval techniques can be used on the tangible interface logs to predict learning gains. Second, we explore how Kinect™ data can inform "in-situ" interactions around a tabletop by using clustering algorithms to find prototypical body positions. Finally, we divided students into two groups by performing a median split on their learning scores and fed those features to a machine-learning classifier (a Support Vector Machine). We found that we were able to predict students' learning gains (i.e., being above or below the median split) with very high accuracy. We discuss the implications of these results for analyzing rich data from multimodal learning environments.
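Since the full paper is not reproduced on this page, the pipeline sketched in the abstract can be illustrated roughly as follows. This is a minimal sketch assuming scikit-learn: the log vocabulary, Kinect joint layout, number of pose clusters, SVM settings, and all synthetic data are illustrative assumptions, not the authors' actual configuration.

```python
# Illustrative sketch of the abstract's pipeline: TF-IDF features from
# interface logs, k-means pose clusters from Kinect frames, median split
# on learning gains, and SVM classification. All data here is synthetic.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_dyads = 40

# --- Step 1: information retrieval features from tangible-interface logs ---
# Each dyad's log is treated as a "document" of interface actions
# (hypothetical action names), vectorized with TF-IDF.
actions = ["move_cochlea", "rotate_eardrum", "connect_nerve", "undo",
           "inspect_ossicles"]
logs = [" ".join(rng.choice(actions, size=50)) for _ in range(n_dyads)]
log_features = TfidfVectorizer().fit_transform(logs).toarray()

# --- Step 2: prototypical body positions from Kinect skeletal data ---
# Each frame is a vector of joint coordinates; k-means finds prototypical
# poses, and each dyad is described by how often it occupies each cluster.
frames = rng.normal(size=(n_dyads * 100, 20 * 3))  # 100 frames x 20 joints (x,y,z)
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(frames)
pose_hist = np.zeros((n_dyads, 5))
for d in range(n_dyads):
    labels = kmeans.labels_[d * 100:(d + 1) * 100]
    pose_hist[d] = np.bincount(labels, minlength=5) / 100.0

# --- Step 3: median split on learning gains, then SVM classification ---
gains = rng.normal(size=n_dyads)            # placeholder learning scores
y = (gains > np.median(gains)).astype(int)  # above vs. below the median
X = np.hstack([log_features, pose_hist])
scores = cross_val_score(SVC(kernel="linear"), X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```

On real data, the TF-IDF vocabulary would come from the logged interface events and the pose vectors from recorded Kinect joint positions; the synthetic inputs above only exercise the plumbing.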