A Model or 603 Exemplars: Towards Memory-Efficient Class-Incremental Learning

Keywords: Benchmark, Code, Memory model, Auxiliary memory
DOI: 10.48550/arXiv.2205.13218 Publication Date: 2022-05-26
ABSTRACT
Real-world applications require the classification model to adapt to new classes without forgetting old ones. Correspondingly, Class-Incremental Learning (CIL) aims to train a model with limited memory size to meet this requirement. Typical CIL methods tend to save representative exemplars from former classes to resist forgetting, while recent works find that storing models from history can substantially boost performance. However, the stored models are not counted into the memory budget, which implicitly results in unfair comparisons. We find that, when counting the model size into the total budget and comparing methods with aligned memory size, saving models does not consistently work, especially in the case of limited memory budgets. As a result, we need to holistically evaluate different CIL methods at different memory scales and simultaneously consider accuracy as one measure. On the other hand, we dive deeply into the construction of the memory buffer for memory efficiency. By analyzing the effect of different layers in the network, we find that shallow and deep layers have different characteristics in CIL. Motivated by this, we propose a simple yet effective baseline, denoted as MEMO for Memory-efficient Expandable MOdel. MEMO extends specialized layers based on the shared generalized representations, efficiently extracting diverse representations at modest cost and maintaining representative exemplars. Extensive experiments on benchmark datasets validate MEMO's competitive performance. Code is available at: https://github.com/wangkiw/ICLR23-MEMO
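
To make the budget-accounting argument concrete: the title reflects that one stored backbone costs roughly as many bytes as several hundred raw exemplars. Below is a minimal back-of-the-envelope sketch of that conversion; the parameter count (~0.46M, a CIFAR-style ResNet-32) and the exemplar size (raw 32x32x3 uint8 images) are illustrative assumptions, not figures taken from this page.

    # Exemplar-equivalent cost of storing one backbone.
    # All numbers below are illustrative assumptions:
    #   - ~0.46M float32 parameters (a CIFAR-style ResNet-32)
    #   - raw 32x32x3 uint8 images as exemplars (3,072 bytes each)
    PARAMS = 463_504                    # assumed backbone parameter count
    BYTES_PER_PARAM = 4                 # float32
    BYTES_PER_EXEMPLAR = 32 * 32 * 3    # one uint8 CIFAR image

    model_bytes = PARAMS * BYTES_PER_PARAM
    print(model_bytes // BYTES_PER_EXEMPLAR)  # -> 603 exemplar-equivalents

Under these assumptions, keeping an extra model in memory is equivalent to giving a rival method roughly 603 additional exemplars, which is why comparisons must align the total budget.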
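The core architectural idea described above is to share the generalized shallow layers across tasks and expand only the specialized deep layers. Below is a minimal PyTorch sketch of that pattern, assuming a toy two-stage split; the layer shapes and the names (MemoLikeNet, add_task) are hypothetical, not the authors' implementation (see the linked repository for the real code).

    import torch
    import torch.nn as nn

    class MemoLikeNet(nn.Module):
        def __init__(self, shared_dim: int = 64):
            super().__init__()
            # Shared shallow layers: generalized representations reused by all tasks.
            self.shared = nn.Sequential(
                nn.Conv2d(3, shared_dim, 3, padding=1), nn.ReLU(),
                nn.Conv2d(shared_dim, shared_dim, 3, padding=1), nn.ReLU(),
            )
            self.specialized = nn.ModuleList()  # one deep block per task
            self.heads = nn.ModuleList()        # one classifier per task

        def add_task(self, num_new_classes: int, shared_dim: int = 64):
            # Expanding only the deep block is far cheaper than duplicating
            # the whole backbone, which is the memory saving MEMO targets.
            block = nn.Sequential(
                nn.Conv2d(shared_dim, shared_dim, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.specialized.append(block)
            self.heads.append(nn.Linear(shared_dim, num_new_classes))

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            z = self.shared(x)
            # Concatenate logits from every task-specific branch.
            return torch.cat(
                [h(b(z)) for b, h in zip(self.specialized, self.heads)], dim=1
            )

    net = MemoLikeNet()
    net.add_task(num_new_classes=10)   # task 1
    net.add_task(num_new_classes=10)   # task 2
    logits = net(torch.randn(2, 3, 32, 32))
    print(logits.shape)  # torch.Size([2, 20])

Each call to add_task grows the memory footprint by one deep block and one head rather than one full network, leaving more of the aligned budget available for exemplars.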