Neural state space alignment for magnitude generalization in humans and recurrent networks
KEYWORDS
Humans; Male; Female; Adult; Young Adult; Brain; Electroencephalography; Size Perception; Psychomotor Performance; Transfer, Psychology; Generalization, Psychological; Models, Neurological; Neural Networks, Computer; Machine Learning; Algorithms
SUBJECT AREAS
03 medical and health sciences; 0301 basic medicine; 0303 health sciences
DOI: 10.1016/j.neuron.2021.02.004
Publication Date: 2021-02-23
AUTHORS (5)
Hannah Sheahan, Fabrice Luyckx, Stephanie Nelli, Clemens Teupe, Christopher Summerfield
ABSTRACT
A prerequisite for intelligent behaviour is to understand how stimuli are related and to generalise this knowledge across contexts. Generalisation can be challenging when relational patterns are shared across contexts but exist on different physical scales. Here, we studied neural representations in humans and recurrent neural networks performing a magnitude comparison task, for which it was advantageous to generalise concepts of “more” or “less” between contexts. Using multivariate analysis of human brain signals and of neural network hidden unit activity, we observed that both systems developed parallel neural “number lines” for each context. In both model systems, these number state spaces were aligned in a way that explicitly facilitated generalisation of relational concepts (more and less). These findings suggest a previously overlooked role for neural normalisation in supporting transfer of a simple form of abstract relational knowledge (magnitude) in humans and machine learning systems.
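The abstract describes the modelling approach only at a high level. Below is a minimal sketch of one way such an experiment can be set up: a small GRU trained on a two-context “more/less” comparison task, followed by PCA on its hidden states to look for context-specific “number lines”. All names (ComparisonRNN, make_batch), magnitude ranges, and hyperparameters here are illustrative assumptions, not the paper’s actual implementation.

import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)
np.random.seed(0)

# Two contexts share the relational structure ("more" / "less") but span
# different magnitude ranges. These ranges are illustrative, not the paper's.
RANGES = {0: (1, 6), 1: (6, 11)}

def make_batch(n=256):
    """Pairs of magnitudes from a random context; label = first > second."""
    ctx = np.random.randint(0, 2, n)
    lo = np.array([RANGES[c][0] for c in ctx])
    hi = np.array([RANGES[c][1] for c in ctx])
    a = np.random.randint(lo, hi + 1)
    b = np.random.randint(lo, hi + 1)
    # Each trial is a 2-step sequence; each step carries (magnitude, context).
    x = np.stack([np.stack([a, ctx], 1), np.stack([b, ctx], 1)], 1).astype(np.float32)
    y = (a > b).astype(np.int64)  # ties count as "not more" in this sketch
    return torch.from_numpy(x), torch.from_numpy(y)

class ComparisonRNN(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(input_size=2, hidden_size=hidden, batch_first=True)
        self.readout = nn.Linear(hidden, 2)
    def forward(self, x):
        h, _ = self.rnn(x)                # hidden states at both time steps
        return self.readout(h[:, -1]), h  # decision from the final state

net = ComparisonRNN()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for step in range(2000):
    x, y = make_batch()
    logits, _ = net(x)
    opt.zero_grad()
    loss_fn(logits, y).backward()
    opt.step()

# Probe the hidden state evoked by each magnitude in each context, then
# project onto shared principal components to visualise the two context
# "number lines" and how they are aligned.
with torch.no_grad():
    states, labels = [], []
    for c, (lo, hi) in RANGES.items():
        for m in range(lo, hi + 1):
            probe = torch.tensor([[[m, c], [m, c]]], dtype=torch.float32)
            _, h = net(probe)
            states.append(h[0, 0].numpy())  # state after the first stimulus
            labels.append((c, m))
H = np.stack(states)
H -= H.mean(0)
_, _, Vt = np.linalg.svd(H, full_matrices=False)  # PCA via SVD
pcs = H @ Vt[:2].T
for (c, m), p in zip(labels, pcs):
    print(f"context {c}, magnitude {m:2d}: PC1 {p[0]:+6.2f}  PC2 {p[1]:+6.2f}")

In this sketch, the PCA projection is a stand-in for the paper’s multivariate analyses: if the trained network has learnt a context-general magnitude code, the per-context state trajectories should appear as roughly parallel lines ordered by magnitude, which is the kind of aligned state-space geometry the abstract describes.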