A Noisy Sample Selection Framework Based on a Mixup Loss and Recalibration Strategy

DOI: 10.3390/math12152389 Publication Date: 2024-07-31T21:16:49Z
ABSTRACT
Deep neural networks (DNNs) have achieved breakthrough progress in various fields, largely owing to the support of large-scale datasets with manually annotated labels. However, obtaining such datasets is costly and time-consuming, making high-quality annotation a challenging task. In this work, we propose an improved noisy sample selection method, termed the "sample selection framework based on a mixup loss and recalibration strategy" (SMR), which enhances the robustness and generalization abilities of models. First, we introduce a robust mixup loss function to pre-train two models with identical structures separately; this avoids additional hyperparameter adjustments and reduces the need for prior knowledge of noise types. Additionally, we use a Gaussian Mixture Model (GMM) to divide the entire training set into labeled and unlabeled subsets, followed by training with semi-supervised learning (SSL) techniques. Furthermore, we use a cross-entropy (CE) loss to prevent the models from converging to local optima during the SSL process, thus further improving performance. Ablation experiments on CIFAR-10 with 50% symmetric and 40% asymmetric noise demonstrate that the modules introduced in this paper improve the accuracy of the baseline (i.e., DivideMix) by 1.5% and 0.5%, respectively. Moreover, experimental results on multiple benchmark datasets show that the proposed method effectively mitigates the impact of noisy labels and significantly improves the performance of DNNs on noisy datasets. For instance, on the WebVision dataset, the method improves top-1 accuracy by 0.7% and 2.4% compared with the baseline method.
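
The selection step described in the abstract, fitting a GMM to per-sample losses and splitting the training set into labeled and unlabeled subsets, can be illustrated with a minimal sketch. This is not the authors' implementation: the model, data loader, posterior threshold `p_threshold`, and the min-max normalization of losses before fitting are illustrative assumptions in the style of DivideMix-like pipelines.

```python
# Minimal sketch (not the paper's code) of GMM-based noisy sample selection:
# fit a two-component Gaussian Mixture Model to per-sample training losses and
# treat samples with a high posterior under the low-loss component as "clean".
import numpy as np
import torch
import torch.nn.functional as F
from sklearn.mixture import GaussianMixture


@torch.no_grad()
def split_by_gmm(model, loader, device="cpu", p_threshold=0.5):
    """Return a boolean mask marking samples judged clean (labeled subset)."""
    model.eval()
    losses = []
    for images, targets in loader:
        logits = model(images.to(device))
        loss = F.cross_entropy(logits, targets.to(device), reduction="none")
        losses.append(loss.cpu())
    losses = torch.cat(losses).numpy().reshape(-1, 1)

    # Normalize losses to [0, 1] before fitting (a common convention,
    # not a detail stated in the abstract).
    losses = (losses - losses.min()) / (losses.max() - losses.min() + 1e-8)

    gmm = GaussianMixture(n_components=2, max_iter=100, reg_covar=5e-4)
    gmm.fit(losses)

    # The component with the smaller mean loss is presumed to model clean samples.
    clean_component = int(np.argmin(gmm.means_.ravel()))
    prob_clean = gmm.predict_proba(losses)[:, clean_component]
    return prob_clean > p_threshold  # complement of the mask -> unlabeled subset
```

In a DivideMix-style pipeline, samples flagged by this mask keep their labels, while the remaining samples are treated as unlabeled and handled by the subsequent SSL stage.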