Approximate number representations emerge in object-trained convolutional neural networks and show human-like signatures of number discrimination
Concepts: Numerosity adaptation effect; Stimulus (psychology)
DOI: 10.1167/jov.20.11.1120
Publication Date: 2020-10-26
AUTHORS (2)
ABSTRACT
What are the visual input analyzers that yield numerosity representations? Recent work from Nasr et al. (2019) suggests an interesting possibility: numerosity representations may be implemented by the same cortical networks that classify objects. They found that a convolutional neural network trained on object classification had units with tuning curves for numerosity, similar to neurons in primate parietal and frontal cortex. Here, we extend these findings, examining whether the network's number representations are tolerant to stimulus variation and show signatures of human number perception. We recorded responses to dot displays from each unit of AlexNet trained on 1000-way object classification. A subset of units showed Gaussian tuning to number, with wider tuning curves for higher preferred numerosities. Tuning curves were stable across stimulus sets controlling for surface area, density, convex hull, total circumference, and dot radius. Extending previous work, we also observed that tuning was maintained even for textured stimuli (for example, fur-textured dots on a grass-textured background). These results replicated in another architecture (VGG16) and, critically, were not evident in an untrained network. Next, we tested whether AlexNet's number representation was susceptible to grouping effects like the human visual system. Humans underestimate the numerosity of dots grouped into pairs relative to randomly arranged dots. We created images in which lines connected dots. Like humans, the network's numerosity estimates decreased as more dots were connected. Altogether, these results indicate that networks trained on object recognition gain robust number representations. Moreover, these representations are influenced by spatial connectedness, matching properties of human behavior. The results support the view that the same computations that untangle object categories from retinal input also yield approximate number representations.
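The tuning-curve analysis described in the abstract can be illustrated with a minimal sketch. This is not the authors' code: the synthetic "unit responses" below are an assumption, and the Gaussian fit is done by simple grid search over a log2 numerosity axis (the axis commonly used for approximate-number tuning), rather than whatever fitting procedure the paper used.

```python
import numpy as np

def gaussian(x, mu, sigma):
    """Unit-height Gaussian, used as the tuning-curve model."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def fit_log_gaussian(numerosities, responses):
    """Fit a Gaussian tuning curve on a log2 numerosity axis by grid search.

    Returns (preferred_numerosity, tuning_width_in_log2_units).
    """
    x = np.log2(numerosities)
    r = responses / responses.max()  # normalize peak response to 1
    best_mu, best_sigma, best_err = None, None, np.inf
    for mu in np.linspace(x.min(), x.max(), 201):
        for sigma in np.linspace(0.1, 3.0, 59):
            err = np.sum((gaussian(x, mu, sigma) - r) ** 2)
            if err < best_err:
                best_mu, best_sigma, best_err = mu, sigma, err
    return 2.0 ** best_mu, best_sigma

# Synthetic unit tuned to numerosity 8 (an assumption for illustration only).
nums = np.array([1, 2, 4, 6, 8, 12, 16, 24, 32])
rng = np.random.default_rng(0)
resp = gaussian(np.log2(nums), np.log2(8), 0.7) + rng.normal(0, 0.02, nums.size)

pref, width = fit_log_gaussian(nums, resp)
print(f"preferred numerosity ~ {pref:.1f}, log2 width ~ {width:.2f}")
```

On a log axis, the "wider tuning at higher preferred numerosities" signature mentioned in the abstract would show up as roughly constant `width` across units, which is why the fit is done in log2 units rather than on raw numerosity.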