Optimised weight programming for analogue memory-based deep neural networks
DOI: 10.1038/s41467-022-31405-1
Publication Date: 2022-06-30
AUTHORS (16)
ABSTRACT
Analogue memory-based deep neural networks provide energy-efficiency and per-area throughput gains relative to state-of-the-art digital counterparts such as graphics processing units. Recent advances focus largely on hardware-aware algorithmic training and improvements to circuits, architectures, and memory devices. Optimal translation of software-trained weights into analogue hardware weights, given the plethora of complex memory non-idealities, represents an equally important task. We report a generalised computational framework that automates the crafting of weight programming strategies to minimise accuracy degradations during inference, particularly over time. The framework is agnostic to network structure and generalises well across recurrent, convolutional, and transformer neural networks. As a highly flexible numerical heuristic, the approach accommodates arbitrary device-level complexity, making it potentially relevant for a variety of analogue memories. By quantifying the limit of achievable inference accuracy, it also enables analogue accelerators to reach their full inference potential.
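The core problem the abstract describes, choosing how to programme analogue conductances so that read-out weights stay accurate as devices drift, can be illustrated with a toy simulation. The sketch below is a minimal, hypothetical example and not the paper's actual framework: it assumes Gaussian programming noise, a power-law conductance-drift model of the kind often used for phase-change memory, and a grid search over a single weight-to-conductance scale factor that trades noise resilience against clipping at a finite conductance range. All function names and parameter values are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)

    def programmed_conductance(target, g_max=3.0, sigma_prog=0.05):
        # Finite conductance range forces clipping of large targets; real
        # devices realise signed weights with a differential pair (G+ - G-),
        # which the signed values here merely stand in for.
        clipped = np.clip(target, -g_max, g_max)
        return clipped + rng.normal(0.0, sigma_prog, size=np.shape(clipped))

    def drifted_conductance(g0, t, t0=1.0, nu=0.05):
        # Empirical power-law drift model: G(t) = G0 * (t / t0) ** (-nu).
        return g0 * (t / t0) ** (-nu)

    def expected_error(scale, weights, eval_times, n_trials=100):
        # Monte-Carlo estimate of mean squared weight error over the horizon.
        err = 0.0
        for _ in range(n_trials):
            g0 = programmed_conductance(scale * weights)
            for t in eval_times:
                w_hat = drifted_conductance(g0, t) / scale  # read-out weights
                err += np.mean((w_hat - weights) ** 2)
        return err / (n_trials * len(eval_times))

    weights = rng.uniform(-1.0, 1.0, size=256)   # toy software-trained weights
    eval_times = [1.0, 1e2, 1e4, 1e6]            # seconds after programming

    # Larger scales shrink the relative programming noise but eventually
    # clip the largest weights; the grid search balances the two.
    scales = np.linspace(0.5, 4.0, 15)
    best = min(scales, key=lambda s: expected_error(s, weights, eval_times))
    print(f"best programming scale: {best:.2f}")

In practice, programming strategies must also handle state-dependent programming noise, read noise, and global drift compensation at read time, and the paper's framework searches over far richer strategy spaces than a single scale factor; this sketch only conveys the shape of the optimisation problem.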