Incremental Self-training for Semi-supervised Learning
FOS: Computer and information sciences
Computer Science - Machine Learning
Machine Learning (cs.LG)
DOI: 10.48550/arxiv.2404.12398
Publication Date: 2024-04-14
AUTHORS (4)
ABSTRACT
Semi-supervised learning provides a way to reduce the dependency of machine learning on labeled data. As an efficient semi-supervised technique, self-training (ST) has received increasing attention, and several advancements have emerged to address the challenges associated with noisy pseudo-labels. However, previous works acknowledge the importance of unlabeled data without delving into how it is utilized, and they pay little attention to the high time cost of iterative learning. This paper proposes Incremental Self-training (IST) to fill these gaps. Unlike ST, which processes all unlabeled data indiscriminately, IST processes the data in batches and preferentially assigns pseudo-labels to samples with high certainty. It then processes the samples around the decision boundary after the model has stabilized, enhancing classifier performance. Our strategy is simple yet effective and fits existing self-training-based methods. We verify the proposed IST on five datasets with two types of backbone, effectively improving both recognition accuracy and learning speed. Significantly, it outperforms state-of-the-art competitors on three challenging image classification tasks.
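The abstract describes a two-phase procedure: pseudo-label the unlabeled pool in batches, accepting only high-certainty predictions first, and then label the remaining samples near the decision boundary once the model has stabilized. A minimal sketch of this idea is below; the dataset, classifier, batch size, and 0.9 confidence threshold are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=600, n_features=10, random_state=0)

# Small labeled set; the rest is treated as unlabeled.
labeled = rng.choice(len(X), size=60, replace=False)
unlabeled = np.setdiff1d(np.arange(len(X)), labeled)
X_l, y_l = X[labeled], y[labeled]
X_u = X[unlabeled]

clf = LogisticRegression(max_iter=1000).fit(X_l, y_l)

batch_size = 100          # assumed batch size for incremental pseudo-labeling
threshold = 0.9           # assumed certainty threshold
remaining = np.ones(len(X_u), dtype=bool)

# Phase 1: process unlabeled data in batches, pseudo-labeling only
# high-certainty samples and retraining after each batch.
for start in range(0, len(X_u), batch_size):
    idx = np.arange(start, min(start + batch_size, len(X_u)))
    proba = clf.predict_proba(X_u[idx])
    keep = idx[proba.max(axis=1) >= threshold]
    if len(keep) == 0:
        continue
    X_l = np.vstack([X_l, X_u[keep]])
    y_l = np.concatenate([y_l, clf.predict_proba(X_u[keep]).argmax(axis=1)])
    remaining[keep] = False
    clf.fit(X_l, y_l)

# Phase 2: once the model has stabilized, pseudo-label the leftover
# low-certainty samples, i.e. those near the decision boundary.
if remaining.any():
    proba = clf.predict_proba(X_u[remaining])
    X_l = np.vstack([X_l, X_u[remaining]])
    y_l = np.concatenate([y_l, proba.argmax(axis=1)])
    clf.fit(X_l, y_l)

final_acc = clf.score(X, y)
```

Processing confident samples first keeps early pseudo-labels clean, while deferring boundary samples until the classifier is stable limits the noise they inject.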