Demonstration of transfer learning using 14 nm technology analog ReRAM array
KEYWORDS
Initialization; Transfer of learning; Edge device
DOI: 10.3389/felec.2023.1331280
Publication Date: 2024-01-15
AUTHORS (46)
ABSTRACT
Analog memory presents a promising solution in the face of growing demand for energy-efficient artificial intelligence (AI) at the edge. In this study, we demonstrate efficient deep neural network (DNN) transfer learning utilizing hardware and algorithm co-optimization on an analog resistive random-access memory (ReRAM) array. For the first time, we illustrate that in open-loop DNN image classification tasks, convergence rates can be accelerated by approximately 3.5 times through the utilization of a co-optimized analog ReRAM array and the hardware-aware Tiki-Taka v2 (TTv2) algorithm. A simulation based on statistical 14 nm CMOS array data provides insights into performance on larger workloads, exhibiting notable improvement over conventional training with random initialization. This study shows that DNN transfer learning using an optimized analog ReRAM array can achieve faster convergence with a smaller dataset compared to training from scratch, thus augmenting AI capability at the edge.
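The abstract contrasts transfer learning (starting from pretrained weights) with conventional training from random initialization. The sketch below illustrates that initialization step in plain NumPy: feature-extractor layers are copied from a pretrained network, and only the classifier head is reinitialized for the new task. This is a minimal illustration of the general transfer-learning setup, not the paper's TTv2 or ReRAM-specific method; the function name, the `(W, b)` layer representation, and the scaled-normal head initialization are all assumptions for the example.

```python
import numpy as np

def init_for_transfer(pretrained_layers, num_new_classes, seed=None):
    """Build initial weights for a new task by reusing pretrained layers.

    `pretrained_layers` is a list of (W, b) tuples, one per dense layer.
    All layers except the last are copied verbatim (the transferred
    "knowledge"); the final classifier head is replaced with fresh,
    small random weights sized for the new label set.

    Illustrative helper only -- not the interface used in the paper.
    """
    rng = np.random.default_rng(seed)

    # Copy the feature-extractor layers unchanged.
    layers = [(W.copy(), b.copy()) for W, b in pretrained_layers[:-1]]

    # Reinitialize only the head: scaled normal weights, zero biases.
    fan_in = pretrained_layers[-1][0].shape[0]
    W_head = rng.normal(0.0, 1.0 / np.sqrt(fan_in), (fan_in, num_new_classes))
    b_head = np.zeros(num_new_classes)
    layers.append((W_head, b_head))
    return layers
```

Training would then proceed as usual from these weights; the paper's reported speedup comes from pairing this kind of warm start with hardware-aware TTv2 training on the analog array, which the sketch above does not model.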