Recurrent neural networks can explain flexible trading of speed and accuracy in biological vision
Concepts: Feedforward, Feedforward neural network, Computational model, Network model
DOI: 10.1371/journal.pcbi.1008215
Publication Date: 2020-10-02
AUTHORS (5)
ABSTRACT
Deep feedforward neural network models of vision dominate in both computational neuroscience and engineering. The primate visual system, by contrast, contains abundant recurrent connections. Recurrent signal flow enables recycling of limited resources over time, and so might boost the performance of a physically finite brain or model. Here we show: (1) Recurrent convolutional neural network models outperform feedforward models matched in their number of parameters on large-scale recognition tasks on natural images. (2) Setting a confidence threshold, at which recurrent computations terminate and a decision is made, enables flexible trading of speed for accuracy. At a given threshold, the model expends more time and energy on images that are harder to recognise, without requiring additional parameters or deeper computations. (3) The recurrent model's reaction time for an image predicts the human reaction time for the same image better than several parameter-matched and state-of-the-art feedforward models. (4) Across confidence thresholds, the recurrent model emulates the behaviour of feedforward control models in that it achieves the same accuracy at approximately the same computational cost (mean number of floating-point operations). However, the recurrent model can be run longer (with a higher threshold) and then outperforms the feedforward comparison models. These results suggest that recurrent connectivity, a hallmark of biological visual systems, may be essential for understanding the accuracy, flexibility, and dynamics of visual recognition.
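The confidence-threshold stopping rule described in point (2) of the abstract can be sketched in code. The snippet below is a minimal, hypothetical illustration in PyTorch, not the authors' published architecture or code: a small recurrent convolutional classifier reuses one set of convolutional weights across time steps and stops computing once its softmax confidence exceeds a threshold, so the number of steps taken serves as a reaction-time measure. All layer sizes, module names, and the confidence_threshold value are assumptions made for illustration.

# Minimal, hypothetical sketch of confidence-thresholded recurrent recognition
# (illustrative only; not the paper's published model).
import torch
import torch.nn as nn
import torch.nn.functional as F


class RecurrentConvClassifier(nn.Module):
    def __init__(self, in_channels=3, hidden_channels=32, n_classes=10, max_steps=8):
        super().__init__()
        self.input_conv = nn.Conv2d(in_channels, hidden_channels, 3, padding=1)
        # Lateral recurrent connection: the hidden state is fed back through the
        # same convolution at every time step, recycling a fixed set of parameters.
        self.recurrent_conv = nn.Conv2d(hidden_channels, hidden_channels, 3, padding=1)
        self.readout = nn.Linear(hidden_channels, n_classes)
        self.max_steps = max_steps

    def forward(self, x, confidence_threshold=0.9):
        bottom_up = F.relu(self.input_conv(x))
        state = torch.zeros_like(bottom_up)
        for step in range(1, self.max_steps + 1):
            # Combine the constant bottom-up drive with the recycled hidden state.
            state = F.relu(bottom_up + self.recurrent_conv(state))
            logits = self.readout(state.mean(dim=(2, 3)))  # global average pooling
            confidence = F.softmax(logits, dim=1).max(dim=1).values
            # Terminate early once every image in the batch is classified with
            # enough confidence; `step` then plays the role of a reaction time.
            if bool((confidence >= confidence_threshold).all()):
                break
        return logits, step


if __name__ == "__main__":
    model = RecurrentConvClassifier()
    images = torch.randn(4, 3, 32, 32)  # stand-in for natural images
    logits, reaction_time = model(images, confidence_threshold=0.9)
    print(logits.shape, "steps used:", reaction_time)

In this sketch, raising the threshold lets the model run for more steps, spending extra computation for higher confidence, while lowering it trades accuracy for speed, mirroring the speed-accuracy trade-off described in the abstract.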