Mapping between fMRI responses to movies and their natural language annotations
FOS: Computer and information sciences; Biological sciences
arXiv subjects: Machine Learning (cs.LG); Computation and Language (cs.CL); Neurons and Cognition (q-bio.NC)
MeSH terms: Brain; Brain Mapping; Magnetic Resonance Imaging; Image Processing, Computer-Assisted; Motion Pictures; Semantics; Language; Humans
Fields of research: 03 medical and health sciences; 0301 basic medicine
Keywords: Natural Language Processing; Machine Learning
DOI: 10.1016/j.neuroimage.2017.06.042
Publication Date: 2017-06-23
AUTHORS (11)
ABSTRACT
Several research groups have shown how to correlate fMRI responses with the meanings of presented stimuli. This paper presents new methods for doing so when only a natural language annotation is available as a description of the stimulus. We study fMRI data gathered from subjects watching an episode of the BBC's Sherlock [1] and learn bidirectional mappings between fMRI responses and natural language representations. We show how to leverage data from multiple subjects watching the same movie to improve the accuracy of the mappings, allowing us to succeed at a scene classification task with 72% accuracy (random guessing would give 4%) and at a scene ranking task with an average rank in the top 4% (random guessing would give 50%). The key ingredients are (a) the use of the Shared Response Model (SRM) and its variant SRM-ICA [2, 3] to aggregate fMRI data from multiple subjects, both of which are shown to be superior to standard PCA in producing low-dimensional representations for the tasks in this paper; (b) a sentence embedding technique adapted from the natural language processing (NLP) literature [4] that produces semantic vector representations of the annotations; and (c) the use of previous-timestep information in the featurization of the predictor data.

19 pages, 9 figures, in submission to NeuroImage. A prior version was presented at the MLINI-2016 workshop (arXiv:1701.01437) and at the ICML 2016 Workshop on Multi-view Representation Learning.
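The pipeline the abstract describes (aggregated fMRI features → previous-timestep featurization → a learned linear map to annotation embeddings → scene ranking) can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's code: the data here is random, the shapes are arbitrary, and a closed-form ridge regression stands in for whatever mapping the authors actually fit; `add_previous_timestep`, `ridge_fit`, and `mean_rank` are hypothetical helper names.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (illustrative shapes, not the paper's):
# shared fMRI response after SRM-style aggregation: (n_TRs, k);
# sentence-embedding vectors for time-aligned annotations: (n_TRs, d).
n_trs, k, d = 200, 20, 32
fmri = rng.standard_normal((n_trs, k))
text = rng.standard_normal((n_trs, d))

def add_previous_timestep(X):
    """Concatenate each timestep's features with the previous timestep's
    (ingredient (c) in the abstract); the first row is repeated at t=0."""
    prev = np.vstack([X[:1], X[:-1]])
    return np.hstack([X, prev])

def ridge_fit(X, Y, alpha=1.0):
    """Closed-form ridge regression mapping X -> Y."""
    n_feat = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_feat), X.T @ Y)

X = add_previous_timestep(fmri)       # (n_TRs, 2k)
W = ridge_fit(X, text)                # fMRI -> text direction; (2k, d)
pred = X @ W                          # predicted annotation embeddings

def mean_rank(pred, true):
    """Scene-ranking evaluation: for each predicted vector, rank all true
    vectors by similarity and record where the matching one lands.
    Returns the average rank as a fraction (chance = 0.5)."""
    ranks = []
    for i in range(len(pred)):
        sims = true @ pred[i]
        ranks.append(int((sims > sims[i]).sum()) + 1)
    return float(np.mean(ranks)) / len(pred)
```

Fitting the reverse map (`ridge_fit(text_features, fmri)`) gives the other direction of the bidirectional mapping; the paper's actual aggregation step (SRM / SRM-ICA) would replace the random `fmri` matrix here.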
REFERENCES (25)
CITATIONS (66)