Non-Exemplar Online Class-incremental Continual Learning via Dual-prototype Self-augment and Refinement
DOI: 10.48550/arXiv.2303.10891
Publication Date: 2023-01-01
ABSTRACT
This paper investigates a new, practical, but challenging problem named Non-exemplar Online Class-incremental continual Learning (NO-CL), which aims to preserve the discernibility of base classes without buffering data examples and to efficiently learn novel classes continuously in a single-pass (i.e., online) data stream. The challenges of this task are mainly two-fold: (1) Both base and novel classes suffer from severe catastrophic forgetting, as no previous samples are available for replay. (2) As online samples can only be observed once, there is no way to fully re-train the whole model, e.g., to re-calibrate the decision boundaries via prototype alignment or feature distillation. In this paper, we propose the Dual-prototype Self-augment and Refinement method (DSR) for the NO-CL problem, which consists of two strategies: 1) Dual class prototypes: vanilla and high-dimensional prototypes are exploited to utilize the pre-trained information and to obtain robust quasi-orthogonal representations rather than example buffers, for both privacy preservation and memory reduction. 2) Self-augment and refinement: instead of updating the whole network, we optimize the high-dimensional prototypes alternately with an extra projection module, based on self-augmented vanilla prototypes, through a bi-level optimization problem. Extensive experiments demonstrate the effectiveness and superiority of the proposed DSR on NO-CL.
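To make the two strategies above concrete, below is a minimal PyTorch sketch of the general idea: a vanilla prototype is the mean of pre-trained features for a class, a small projection module maps it into a high-dimensional space (where random vectors are quasi-orthogonal with high probability), and both the high-dimensional prototypes and the projection module are refined against Gaussian-perturbed ("self-augmented") vanilla prototypes. The module architecture, dimensions, noise model, loss, and the single-level training loop are all illustrative assumptions; the abstract does not specify them, and the paper's actual refinement is a bi-level optimization.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical dimensions -- the abstract does not fix these.
FEAT_DIM = 512    # pre-trained backbone feature size (vanilla prototypes)
HIGH_DIM = 4096   # high-dimensional space: random vectors here are
                  # quasi-orthogonal with high probability

class ProjectionModule(nn.Module):
    """Stand-in for the paper's extra projection module (architecture assumed)."""
    def __init__(self, in_dim: int = FEAT_DIM, out_dim: int = HIGH_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, out_dim),
            nn.ReLU(),
            nn.Linear(out_dim, out_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.net(x), dim=-1)

def vanilla_prototype(class_features: torch.Tensor) -> torch.Tensor:
    """Vanilla prototype: mean of pre-trained features for one class."""
    return class_features.mean(dim=0)

def self_augment(proto: torch.Tensor, n_aug: int = 8, sigma: float = 0.05) -> torch.Tensor:
    """Pseudo-samples around a stored prototype (Gaussian noise is an assumption);
    these substitute for the example buffer that NO-CL forbids."""
    return proto.unsqueeze(0) + sigma * torch.randn(n_aug, proto.shape[-1])

def refine(proj: ProjectionModule,
           high_protos: torch.Tensor,           # (num_classes, HIGH_DIM), requires_grad=True
           vanilla_protos: list,                # one FEAT_DIM tensor per class
           steps: int = 100, lr: float = 1e-3) -> None:
    """Refine high-dimensional prototypes jointly with the projection module so
    that augmented vanilla prototypes land near their class's high-dimensional
    prototype. A simplified single-level proxy for the paper's bi-level problem."""
    opt = torch.optim.Adam(list(proj.parameters()) + [high_protos], lr=lr)
    for _ in range(steps):
        loss = torch.zeros(())
        for c, vp in enumerate(vanilla_protos):
            pred = proj(self_augment(vp))                 # (n_aug, HIGH_DIM), unit-norm
            target = F.normalize(high_protos[c], dim=-1)  # unit-norm target prototype
            loss = loss + (1 - (pred * target).sum(dim=-1)).mean()  # cosine loss
        opt.zero_grad()
        loss.backward()
        opt.step()

# Usage sketch: 10 base classes with random stand-in features.
vanilla_protos = [vanilla_prototype(torch.randn(20, FEAT_DIM)) for _ in range(10)]
high_protos = torch.randn(10, HIGH_DIM, requires_grad=True)  # quasi-orthogonal init
refine(ProjectionModule(), high_protos, vanilla_protos, steps=10)
```

At inference time, a query feature would presumably be projected the same way and classified by nearest high-dimensional prototype; this too is an assumption about typical prototype-based classification, not a detail given in the abstract.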