Training a spiking neuronal network model of visual-motor cortex to play a virtual racket-ball game using reinforcement learning

DOI: 10.1371/journal.pone.0265808 Publication Date: 2022-05-11T17:59:23Z
ABSTRACT
Recent models of spiking neuronal networks have been trained to perform behaviors in static environments using a variety of learning rules, with varying degrees of biological realism. Most of these models have not been tested in dynamic visual environments, where a model must make predictions about future states and adjust its behavior accordingly. The learning rules are often treated as black boxes, with little analysis of the circuit architectures and learning mechanisms supporting optimal performance. Here we developed a spiking neuronal network model of visual-motor cortex and trained it to play a virtual racket-ball game using several reinforcement learning algorithms inspired by the dopaminergic reward system. We systematically investigated how different circuit motifs (feed-forward, recurrent, feedback) contributed to learning, and developed a new biologically-inspired learning rule that significantly enhanced performance while reducing training time. Our model included visual areas encoding game inputs and relaying the information to motor areas, which used this information to learn to move the racket to hit the ball. Neurons in the early visual area relayed information encoding object location and motion direction across the network. Neuronal association areas encoded spatial relationships between objects in the visual scene. Motor populations received inputs from visual and association areas representing the dorsal visual pathway. Two populations of motor neurons generated commands to move the racket up or down. Model-generated actions updated the environment and triggered reward or punishment signals that adjusted synaptic weights so that the model could learn which actions led to reward. We demonstrate that our biologically-plausible learning rules were effective in training spiking neuronal network models to solve problems in dynamic environments, and we dissect which circuit architectures and mechanisms were most important for learning. Our model shows that networks involving different neural circuits can produce similar performance on sensory-motor tasks. In biological networks, these circuits and learning rules may complement one another, accelerating the learning capabilities of animals. Furthermore, this highlights the resilience and redundancy of neural systems.
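The core learning mechanism the abstract describes, reward or punishment signals adjusting the synaptic weights of recently co-active neurons, can be sketched as a generic reward-modulated, eligibility-trace-style update. This is a minimal illustrative sketch, not the authors' implementation; all names, sizes, and constants here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: visual inputs projecting to two motor units ("up"/"down")
N_IN, N_OUT = 20, 2
w = rng.uniform(0.0, 0.5, (N_OUT, N_IN))   # synaptic weights
elig = np.zeros_like(w)                    # eligibility traces tagging recent co-activity

TAU_E, LR = 0.8, 0.1                       # trace decay per step, learning rate (illustrative)

def step(pre_spikes, post_spikes, reward):
    """One update: decay traces, tag coincident pre/post activity, apply reward signal."""
    global w, elig
    elig *= TAU_E                               # traces fade over time
    elig += np.outer(post_spikes, pre_spikes)   # tag synapses with recent coincidences
    w += LR * reward * elig                     # reward strengthens, punishment weakens
    np.clip(w, 0.0, 1.0, out=w)                 # keep weights bounded

# Toy usage: a rewarded coincidence strengthens the tagged synapse
pre = np.zeros(N_IN);  pre[3] = 1.0
post = np.zeros(N_OUT); post[0] = 1.0
w_before = w[0, 3]
step(pre, post, reward=+1.0)
assert w[0, 3] > w_before
```

Because the trace decays rather than resets, a reward delivered a few steps after an action (as when the racket finally hits or misses the ball) still credits the synapses that contributed to that action.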