V1T: large-scale mouse V1 response prediction using a Vision Transformer

Computational model
DOI: 10.48550/arXiv.2302.03023 Publication Date: 2023-01-01
ABSTRACT
Accurate predictive models of the visual cortex's neural response to natural visual stimuli remain a challenge in computational neuroscience. In this work, we introduce V1T, a novel Vision Transformer based architecture that learns a shared visual and behavioral representation across animals. We evaluate our model on two large datasets recorded from mouse primary visual cortex and outperform previous convolution-based models by more than 12.7% in prediction performance. Moreover, we show that the self-attention weights learned by the model correlate with the population receptive fields. Our model thus sets a new benchmark for neural response prediction and can be used jointly with behavioral and neural recordings to reveal meaningful characteristic features of the visual cortex.
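The abstract's claim that self-attention weights correlate with population receptive fields can be illustrated with a minimal sketch: patchify a stimulus, run single-head self-attention over the patch tokens, and average the attention weights over queries to obtain a spatial map per patch. This is an illustrative stand-in, not the authors' V1T implementation; random projections replace learned ones, and the function names are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention_map(image, patch=8, dim=16, seed=0):
    """Single-head self-attention over non-overlapping image patches.

    Returns the full attention matrix and the attention averaged over
    query tokens, reshaped to a coarse spatial map (one value per patch).
    Illustrative only: projections are random, not trained weights.
    """
    rng = np.random.default_rng(seed)
    H, W = image.shape
    # Flatten each (patch x patch) block into one token vector.
    tokens = (image.reshape(H // patch, patch, W // patch, patch)
                   .transpose(0, 2, 1, 3)
                   .reshape(-1, patch * patch))
    Wq = rng.standard_normal((patch * patch, dim))
    Wk = rng.standard_normal((patch * patch, dim))
    q, k = tokens @ Wq, tokens @ Wk
    attn = softmax(q @ k.T / np.sqrt(dim))   # rows sum to 1
    # Average over queries -> one weight per key patch, as a spatial map.
    amap = attn.mean(axis=0).reshape(H // patch, W // patch)
    return attn, amap

stimulus = np.random.default_rng(1).standard_normal((32, 32))
attn, amap = self_attention_map(stimulus)
```

In the paper's analysis the analogous maps are compared against population receptive fields estimated from the recordings; here `amap` merely shows the mechanics of reading a spatial saliency pattern out of attention weights.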