Convolutional neural networks can decode eye movement data: A black box approach to predicting task from eye movements
Keywords:
Hyperparameter, Timeline
DOI:
10.31234/osf.io/5a6jm
Publication Date:
2020-09-23T03:55:52Z
AUTHORS (4)
ABSTRACT
Previous attempts to classify task from eye movement data have relied on model architectures designed to emulate theoretically defined cognitive processes and/or data that have been processed into aggregate (e.g., fixations, saccades) or statistical (e.g., fixation density) features. _Black box_ convolutional neural networks (CNNs) are capable of identifying relevant features in raw and minimally processed images, but difficulty interpreting these features has contributed to challenges in generalizing lab-trained CNNs to applied contexts. In the current study, a CNN classifier was used with two datasets (Exploratory and Confirmatory) in which participants searched, memorized, or rated indoor and outdoor scene images. The Exploratory dataset was used to tune the hyperparameters of the model, and the resulting architecture was re-trained, validated, and tested on the Confirmatory dataset. The data were formatted into timeline images (i.e., x-coordinate, y-coordinate, pupil size). To further understand the informational value of each component of the data, the timeline images were broken down into subsets with one or more components systematically removed. Classification of the full timeline images consistently outperformed classification of the reduced data. The Memorize condition was most often confused with Search and Rate. Pupil size was the least uniquely informative component when compared to the x- and y-coordinates. The general pattern of results for the Exploratory dataset was replicated in the Confirmatory dataset. Overall, the present study provides a practical and reliable black box solution to classifying task from eye movement data.
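
As a rough illustration of the pipeline the abstract describes (x-coordinate, y-coordinate, and pupil-size traces formatted into timeline images and classified by a CNN), the sketch below shows one possible implementation in Keras. The sample count, normalization, layer sizes, and training settings are assumptions made for demonstration, not the architecture or hyperparameters reported in the study.

```python
# Hypothetical sketch: format gaze samples into "timeline images" and
# classify task (Search / Memorize / Rate) with a small CNN.
# Shapes, preprocessing, and layer choices are illustrative assumptions,
# not the model described in the paper.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

N_SAMPLES = 1000   # gaze samples per trial (assumed)
N_CHANNELS = 3     # x-coordinate, y-coordinate, pupil size
N_CLASSES = 3      # Search, Memorize, Rate

def to_timeline_image(trial):
    """Scale each channel of a (N_SAMPLES, 3) trial to [0, 1].

    Each column (x, y, pupil size) becomes one channel of the
    'timeline image' the CNN sees.
    """
    mins = trial.min(axis=0, keepdims=True)
    maxs = trial.max(axis=0, keepdims=True)
    return (trial - mins) / (maxs - mins + 1e-8)

def build_cnn():
    """A minimal 1-D CNN over the timeline; hyperparameters are placeholders."""
    model = models.Sequential([
        layers.Input(shape=(N_SAMPLES, N_CHANNELS)),
        layers.Conv1D(32, kernel_size=9, activation="relu"),
        layers.MaxPooling1D(4),
        layers.Conv1D(64, kernel_size=9, activation="relu"),
        layers.GlobalAveragePooling1D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(N_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    # Random stand-in data; in practice these would be eye-tracker recordings.
    trials = np.random.rand(256, N_SAMPLES, N_CHANNELS).astype("float32")
    labels = np.random.randint(0, N_CLASSES, size=256)
    X = np.stack([to_timeline_image(t) for t in trials])
    model = build_cnn()
    model.fit(X, labels, epochs=2, batch_size=32, validation_split=0.2)
```

Zeroing out one of the three channels in `X` before training would mimic, at a sketch level, the subset analysis in which timeline components were systematically removed.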