MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge
KEYWORDS
Speedup, Memory footprint, Edge device
DOI:
10.48550/arXiv.2110.14032
Publication Date:
2021-01-01
AUTHORS (16)
ABSTRACT
Recently, a new trend of exploring sparsity for accelerating neural network training has emerged, embracing the paradigm of training on the edge. This paper proposes a novel Memory-Economic Sparse Training (MEST) framework targeting accurate and fast execution on edge devices. The proposed MEST framework consists of enhancements by Elastic Mutation (EM) and Soft Memory Bound (&S) that ensure superior accuracy at high sparsity ratios. Different from existing works on sparse training, this work reveals the importance of sparsity schemes for sparse training performance in terms of accuracy as well as training speed on real edge devices. On top of that, we propose to employ data efficiency for further acceleration of sparse training. Our results suggest that unforgettable examples can be identified in-situ even during the dynamic exploration of sparsity masks in the sparse training process, and can therefore be removed for further training speedup on edge devices. Compared with state-of-the-art (SOTA) works on accuracy, our MEST increases the Top-1 accuracy significantly on ImageNet when using the same unstructured sparsity scheme. Systematic evaluations of accuracy, training speed, and memory footprint are conducted, where the proposed MEST framework consistently outperforms representative SOTA works. A reviewer strongly argued against this work based on false assumptions and misunderstandings. On top of the previous submission, we additionally explore the impact of model sparsity, sparsity schemes, and sparse training algorithms on the number of removable examples. Our codes are publicly available at: https://github.com/boone891214/MEST.
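To make the abstract's mechanism concrete, below is a minimal, hypothetical sketch of dynamic sparse training with periodic prune-and-grow mask mutation, in the spirit of MEST's Elastic Mutation (the Soft Memory Bound variant additionally allows the number of nonzeros to temporarily exceed the budget during mutation). This is not the authors' implementation (see the official repo at https://github.com/boone891214/MEST); the layer sizes, mutation fraction, and random-growth policy are illustrative assumptions.

```python
# Hypothetical sketch of dynamic sparse training with prune-and-grow mask
# mutation; hyperparameters and the random-growth policy are illustrative,
# not the MEST authors' exact algorithm.
import torch
import torch.nn as nn
import torch.nn.functional as F


def init_mask(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Random unstructured mask that keeps a (1 - sparsity) fraction of weights."""
    numel = weight.numel()
    n_keep = int(round(numel * (1.0 - sparsity)))
    mask = torch.zeros(numel, dtype=torch.bool)
    mask[torch.randperm(numel)[:n_keep]] = True
    return mask.view_as(weight)


@torch.no_grad()
def elastic_mutation(weight: torch.Tensor, mask: torch.Tensor, mutate_frac: float = 0.1) -> None:
    """Drop the smallest-magnitude active weights and grow an equal number of
    inactive positions (random growth here), keeping total sparsity fixed."""
    flat_w, flat_m = weight.view(-1), mask.view(-1)
    active = flat_m.nonzero(as_tuple=True)[0]
    inactive = (~flat_m).nonzero(as_tuple=True)[0]
    n_mutate = min(int(len(active) * mutate_frac), len(inactive))
    if n_mutate == 0:
        return
    drop = active[torch.topk(flat_w[active].abs(), n_mutate, largest=False).indices]
    grow = inactive[torch.randperm(len(inactive))[:n_mutate]]
    flat_m[drop] = False
    flat_m[grow] = True
    flat_w[grow] = 0.0  # newly grown weights start from zero


# Toy training loop: re-apply the mask after every optimizer step and
# mutate the masks at a fixed interval (values are illustrative).
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
masks = {n: init_mask(p.data, sparsity=0.9)
         for n, p in model.named_parameters() if p.dim() > 1}
opt = torch.optim.SGD(model.parameters(), lr=0.1)

for step in range(500):
    x, y = torch.randn(64, 784), torch.randint(0, 10, (64,))
    loss = F.cross_entropy(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        for n, p in model.named_parameters():
            if n in masks:
                p.data.mul_(masks[n])  # enforce the current sparsity mask
    if step > 0 and step % 100 == 0:
        for n, p in model.named_parameters():
            if n in masks:
                elastic_mutation(p.data, masks[n])
```

The data-efficiency part of the abstract relies on identifying "unforgettable" examples during training. The small tracker below follows the common forgetting-event idea (examples that are learned and never later misclassified become candidates for removal); the class name and removal criterion are assumptions for illustration, not the paper's exact in-situ procedure.

```python
# Hypothetical forgetting-event tracker; names and the removal criterion are
# illustrative rather than the paper's exact procedure.
from collections import Counter


class ForgettingTracker:
    """Counts, per example id, how often a correct prediction later becomes incorrect."""

    def __init__(self) -> None:
        self.last_correct: dict[int, bool] = {}
        self.forget_events: Counter = Counter()

    def update(self, example_ids, correct) -> None:
        """example_ids and correct are equal-length sequences for one mini-batch."""
        for idx, ok in zip(example_ids, correct):
            if self.last_correct.get(idx, False) and not ok:
                self.forget_events[idx] += 1  # a forgetting event occurred
            self.last_correct[idx] = bool(ok)

    def unforgettable(self) -> list:
        """Examples learned at least once and never forgotten so far."""
        return [i for i, ok in self.last_correct.items()
                if ok and self.forget_events[i] == 0]
```

In such a setup, examples flagged as unforgettable could be dropped from later epochs to reduce training time, which is the kind of in-situ removal the abstract describes.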