Reducing the Unlabeled Sample Complexity of Semi-Supervised Multi-View Learning
Sample complexity
Regularization
Labeled data
Supervised Learning
DOI:
10.1145/2783258.2783409
Publication Date:
2015-08-07T15:38:27Z
AUTHORS (2)
ABSTRACT
In semi-supervised multi-view learning, the unlabeled sample complexity (u.s.c.) specifies the size of the unlabeled training sample that guarantees a desired learning error. In this paper, we improve the state-of-the-art u.s.c. from O(1/ε) to O(log 1/ε) for small error ε, under mild conditions. To obtain the improved result, as a primary step we prove a connection between the generalization error of a classifier and its incompatibility, which measures the classifier's fitness to the underlying data distribution. We then show that, with a sufficiently large unlabeled sample, one is able to find classifiers with low incompatibility. Combining the two observations, we manage to prove a probably approximately correct (PAC) style bound for semi-supervised multi-view learning. We empirically verified our theory by designing two proof-of-concept algorithms, one based on active view sensing and the other on online co-regularization, evaluated on real-world data sets.
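The incompatibility notion from the abstract can be made concrete with a small sketch. The following is not the authors' implementation; it is a minimal, hypothetical Python illustration of one common empirical proxy for incompatibility in co-regularized multi-view learning: the fraction of unlabeled points on which the two per-view classifiers disagree. The names `incompatibility`, `h1`, and `h2` are assumptions for illustration.

```python
def incompatibility(h1, h2, unlabeled):
    """Empirical incompatibility of the classifier pair (h1, h2).

    h1, h2    -- predictors for view 1 and view 2 (hypothetical callables)
    unlabeled -- iterable of (x1, x2) view pairs with no labels

    Returns the fraction of unlabeled points where the views disagree;
    low incompatibility means the pair fits the data distribution well.
    """
    points = list(unlabeled)
    disagreements = sum(1 for x1, x2 in points if h1(x1) != h2(x2))
    return disagreements / len(points)

# Toy usage with scalar threshold classifiers (illustrative only):
h1 = lambda x: 1 if x > 0.5 else 0
h2 = lambda x: 1 if x > 0.4 else 0
sample = [(0.1, 0.1), (0.45, 0.45), (0.9, 0.9), (0.3, 0.3)]
print(incompatibility(h1, h2, sample))  # disagree only on 0.45 -> 0.25
```

A co-regularization style algorithm would penalize this disagreement term on the unlabeled sample while fitting the (few) labeled points, which is one way to search for the low-incompatibility classifiers the theory requires.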
REFERENCES (30)
CITATIONS (8)