Efficient Implementation of a Multi-Layer Gradient-Free Online-Trainable Spiking Neural Network on FPGA
Subjects: Artificial Intelligence (cs.AI); Hardware Architecture (cs.AR); Neural and Evolutionary Computing (cs.NE)
Keywords: field programmable gate array (FPGA); spiking neural networks; neuromorphic hardware; supervised learning
DOI:
10.48550/arXiv.2305.19468
Publication Date:
2023-01-01
AUTHORS (5)
ABSTRACT
This paper presents an efficient hardware implementation of the recently proposed Optimized Deep Event-driven Spiking Neural Network Architecture (ODESA). ODESA is the first network to have end-to-end multi-layer online local supervised training without using gradients and has the combined adaptation of weights and thresholds in an efficient hierarchical structure. This research shows that the ODESA architecture can be implemented efficiently on large-scale hardware. The implementation consists of a multi-layer Spiking Neural Network (SNN) and individual training modules for each layer that enable online self-learning without using back-propagation. By using simple adaptive selection thresholds, a Winner-Takes-All (WTA) constraint on each layer, and a modified weight update rule that is more amenable to hardware, the trainer module allocates neuronal resources optimally at each layer without having to pass high-precision error measurements across layers. All elements in the system, including the training module, interact using event-based binary spikes. The hardware-optimized implementation is shown to preserve the performance of the original algorithm across multiple spatial-temporal classification problems with significantly reduced hardware requirements.
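The abstract names three hardware-friendly ingredients: per-neuron adaptive thresholds, a per-layer Winner-Takes-All constraint, and a local supervised update that passes no high-precision error values between layers. The minimal Python sketch below illustrates how one such layer could behave; the class WTALayer, the parameters eta and theta_step, and the exact reward/miss updates are illustrative assumptions for this page, not the paper's actual ODESA training rules or its FPGA implementation.

import numpy as np

class WTALayer:
    """One event-driven layer with a Winner-Takes-All constraint and
    per-neuron adaptive thresholds (illustrative sketch only)."""

    def __init__(self, n_in, n_out, eta=0.1, theta_step=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.random((n_out, n_in))    # feed-forward weights
        self.theta = np.full(n_out, 0.5)      # adaptive firing thresholds
        self.eta = eta                        # weight adaptation rate (assumed)
        self.theta_step = theta_step          # threshold adaptation step (assumed)

    def forward(self, x):
        """x: binary spike vector. Returns the winning neuron index, or None
        if the best match does not reach its threshold (no output spike)."""
        act = self.w @ x
        winner = int(np.argmax(act))
        return winner if act[winner] >= self.theta[winner] else None

    def train_event(self, x, target):
        """One local, gradient-free update from a single labelled event.
        Only the binary outcome (correct winner vs. wrong or silent)
        drives the update; no error values cross layer boundaries."""
        winner = self.forward(x)
        if winner == target:
            # Correct spike: pull the winner toward the event pattern and
            # raise its threshold so it stays selective.
            self.w[winner] += self.eta * (x - self.w[winner])
            self.theta[winner] += self.theta_step
        else:
            # Wrong or no spike: make the target neuron easier to recruit
            # and move its weights toward the missed pattern.
            self.theta[target] = max(0.0, self.theta[target] - self.theta_step)
            self.w[target] += self.eta * (x - self.w[target])
        return winner

# Usage on a single synthetic binary event:
layer = WTALayer(n_in=8, n_out=4)
event = np.asarray([1, 0, 1, 0, 0, 1, 0, 0], dtype=float)
layer.train_event(event, target=2)

Note that forward() emits at most one binary output event per input event, so stacking such layers keeps all inter-layer traffic as binary spikes, consistent with the abstract's claim that every element in the system, including the trainer, interacts through event-based binary spikes.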