A case for multiple and parallel RRAMs as synaptic model for training SNNs

FOS: Computer and information sciences
Subjects: Emerging Technologies (cs.ET); Neural and Evolutionary Computing (cs.NE)
DOI: 10.48550/arxiv.1803.04773
Publication Date: 2018-01-01
ABSTRACT
To enable a dense integration of model synapses in spiking neural network hardware, various nano-scale devices are being considered. Such a device, besides exhibiting spike-time dependent plasticity (STDP), needs to be highly scalable, have large endurance, and require low energy for transitioning between states. In this work, we first introduce and empirically determine two new specifications for a synapse in SNNs: the number of conductance levels per synapse and the maximum learning-rate. To the best of our knowledge, there are no RRAMs that meet the latter specification. As a solution, we propose using multiple PCMO-RRAMs in parallel within a synapse. During synaptic reading, all RRAMs are read simultaneously; on each conductance-change event, the STDP mechanism is initiated in only one RRAM, randomly picked from the set. Second, to validate the solution, we experimentally demonstrate STDP in a PCMO-RRAM and then show that, due to its large learning-rate, a single RRAM fails at training the SNN. Third, as anticipated, network performance improves as more RRAMs are added to each synapse. Fourth, we discuss the circuit requirements for implementing such a scheme and conclude that the requirements are within practical bounds. Thus, this work presents specifications for trainable SNNs, indicates the shortcomings of state-of-the-art contenders, provides a solution to extrinsically meet the specifications, and discusses the peripheral circuitry that implements the solution.
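The read/update mechanism described above is easy to illustrate in simulation. Below is a minimal Python sketch of a compound synapse built from N parallel devices; the class name ParallelRRAMSynapse, the 16-level quantization, and the unit update step are illustrative assumptions for this sketch, not values taken from the paper.

```python
import random

class ParallelRRAMSynapse:
    """Toy model of one synapse built from N RRAM devices in parallel.

    Hypothetical sketch of the scheme described in the abstract; device
    parameters (level count, step size) are assumptions, not measured
    PCMO-RRAM values.
    """

    def __init__(self, n_rrams, n_levels=16):
        self.n_levels = n_levels
        # Each device holds one of n_levels discrete conductance levels.
        self.levels = [random.randrange(n_levels) for _ in range(n_rrams)]

    def read(self):
        # Synaptic read: all devices are sensed simultaneously, so the
        # effective weight is their summed conductance (normalized here
        # to [0, 1] for comparison across different N).
        return sum(self.levels) / (len(self.levels) * (self.n_levels - 1))

    def stdp_update(self, potentiate):
        # Conductance-change event: STDP is applied to only ONE device,
        # picked at random from the set.
        i = random.randrange(len(self.levels))
        step = 1 if potentiate else -1
        self.levels[i] = min(self.n_levels - 1, max(0, self.levels[i] + step))


if __name__ == "__main__":
    random.seed(0)
    for n in (1, 4, 16):
        syn = ParallelRRAMSynapse(n_rrams=n)
        before = syn.read()
        syn.stdp_update(potentiate=True)
        print(f"N={n:2d}: weight change per event = {syn.read() - before:+.4f}")
```

Because only one of the N devices changes per event, the compound synapse's effective learning-rate is roughly 1/N of a single device's, which is the abstract's argument for why adding parallel RRAMs makes SNN training feasible even when no single RRAM meets the learning-rate specification.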