Self-PU: Self Boosted and Calibrated Positive-Unlabeled Training

Keywords: MNIST database, Regularization, Labeled data
DOI: 10.48550/arXiv.2006.11280 · Publication Date: 2020-01-01
ABSTRACT
Many real-world applications have to tackle the Positive-Unlabeled (PU) learning problem, i.e., learning binary classifiers from a large amount of unlabeled data and a few labeled positive examples. While current state-of-the-art methods employ importance reweighting to design various risk estimators, they ignore the learning capability of the model itself, which could provide reliable supervision. This motivates us to propose a novel Self-PU learning framework, which seamlessly integrates PU learning and self-training. Self-PU highlights three "self"-oriented building blocks: a self-paced training algorithm that adaptively discovers and augments confident positive/negative examples as training proceeds; a self-calibrated instance-aware loss; and a self-distillation scheme that introduces teacher-student learning as an effective regularization for PU learning. We demonstrate the state-of-the-art performance of Self-PU on common PU learning benchmarks (MNIST and CIFAR-10), where it compares favorably against the latest competitors. Moreover, we study a real-world application of PU learning: classifying brain images for Alzheimer's Disease. Self-PU obtains significantly improved results on the renowned Alzheimer's Disease Neuroimaging Initiative (ADNI) database over existing methods. The code is publicly available at: https://github.com/TAMU-VITA/Self-PU.
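
The importance-reweighted risk estimators the abstract refers to are typified by the non-negative PU (nnPU) risk of Kiryo et al. (2017), which work in this line builds on. Below is a minimal PyTorch sketch of that estimator, together with an illustrative self-paced selection step and a generic teacher-student distillation term in the spirit of the three building blocks. The function names, the default sigmoid surrogate loss, the quantile-based selection heuristic, and the MSE distillation term are assumptions for illustration, not the paper's actual implementation (see the linked repository for that).

    import torch
    import torch.nn.functional as F

    def nnpu_risk(scores_p, scores_u, prior, loss=lambda z: torch.sigmoid(-z)):
        # scores_p: model outputs on labeled positives
        # scores_u: model outputs on unlabeled examples
        # prior:    class prior pi = P(y = +1), assumed known or pre-estimated
        risk_p_pos = loss(scores_p).mean()    # positives scored as the +1 class
        risk_p_neg = loss(-scores_p).mean()   # positives scored as the -1 class
        risk_u_neg = loss(-scores_u).mean()   # unlabeled scored as the -1 class
        neg_risk = risk_u_neg - prior * risk_p_neg  # reweighted negative risk
        # Clamp the negative part at zero; this non-negativity correction is
        # what distinguishes nnPU from the earlier unbiased uPU estimator.
        return prior * risk_p_pos + torch.clamp(neg_risk, min=0.0)

    def select_confident(scores_u, quantile=0.05):
        # Illustrative self-paced step: promote the most confidently scored
        # unlabeled examples to pseudo-positives / pseudo-negatives.
        k = max(1, int(quantile * scores_u.numel()))
        pseudo_pos = torch.topk(scores_u, k).indices   # highest scores
        pseudo_neg = torch.topk(-scores_u, k).indices  # lowest scores
        return pseudo_pos, pseudo_neg

    def distillation_loss(student_logits, teacher_logits):
        # Teacher-student regularization: match the student's predicted
        # probabilities to the teacher's (a generic distillation term).
        return F.mse_loss(torch.sigmoid(student_logits),
                          torch.sigmoid(teacher_logits.detach()))

In a training loop of this shape, the confident examples returned by select_confident would be moved into the (pseudo-)labeled sets as training proceeds, and the distillation term would be added to the PU risk; the actual pacing schedules and the instance-aware loss calibration are specified in the paper and its released code.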