Improving out-of-distribution generalization via multi-task self-supervised pretraining
FOS: Computer and information sciences
Machine Learning (cs.LG)
Computer Vision and Pattern Recognition (cs.CV)
DOI:
10.48550/arXiv.2003.13525
Publication Date:
2020-03-30
AUTHORS (5)
ABSTRACT
Self-supervised feature representations have been shown to be useful for supervised classification, few-shot learning, and adversarial robustness. We show that features obtained using self-supervised learning are comparable to, or better than, those from supervised learning for domain generalization in computer vision. We introduce a new pretext task of predicting responses to Gabor filter banks and demonstrate that multi-task learning of compatible pretext tasks improves performance as compared to training on individual tasks alone. Features learnt through self-supervision generalize better to unseen domains than their supervised counterparts when there is a larger shift between the training and test distributions, and even show better localization ability for the objects of interest. Self-supervised features can also be combined with other domain generalization methods to further boost performance.
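
The two ideas in the abstract, regressing Gabor filter bank responses as a pretext task and combining compatible pretext tasks in a multi-task objective, can be sketched in PyTorch as below. This is a minimal illustration, not the paper's implementation: the encoder, the 8-orientation Gabor bank parameters, the choice of rotation prediction as the second task, and the equal loss weighting alpha are all assumptions made for the example; MultiTaskNet and gabor_bank are hypothetical names.

import math
import torch
import torch.nn as nn
import torch.nn.functional as F

def gabor_bank(n_orientations=8, size=15, sigma=3.0, lambd=8.0, gamma=0.5):
    """Fixed bank of Gabor kernels at evenly spaced orientations (illustrative parameters)."""
    half = size // 2
    ys, xs = torch.meshgrid(
        torch.arange(-half, half + 1, dtype=torch.float32),
        torch.arange(-half, half + 1, dtype=torch.float32),
        indexing="ij",
    )
    kernels = []
    for k in range(n_orientations):
        theta = math.pi * k / n_orientations
        x_t = xs * math.cos(theta) + ys * math.sin(theta)
        y_t = -xs * math.sin(theta) + ys * math.cos(theta)
        envelope = torch.exp(-(x_t ** 2 + (gamma * y_t) ** 2) / (2 * sigma ** 2))
        kernels.append(envelope * torch.cos(2 * math.pi * x_t / lambd))
    return torch.stack(kernels).unsqueeze(1)  # shape (K, 1, size, size)

class MultiTaskNet(nn.Module):
    """Shared encoder with one head per pretext task (toy architecture, not the paper's)."""
    def __init__(self, n_orientations=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # Head 1: per-pixel regression of Gabor filter bank responses.
        self.gabor_head = nn.Conv2d(64, n_orientations, 1)
        # Head 2: 4-way rotation classification (0/90/180/270 degrees),
        # a standard pretext task used here as the assumed second task.
        self.rot_head = nn.Linear(64, 4)

    def forward(self, x):
        h = self.encoder(x)
        gabor_pred = self.gabor_head(h)
        rot_logits = self.rot_head(h.mean(dim=(2, 3)))  # global average pool
        return gabor_pred, rot_logits

# One illustrative multi-task training step on random grayscale images.
bank = gabor_bank()
net = MultiTaskNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

x = torch.rand(8, 1, 64, 64)                       # stand-in for real images
rot_labels = torch.randint(0, 4, (8,))             # sampled rotations
x_rot = torch.stack([torch.rot90(img, int(k), dims=(1, 2))
                     for img, k in zip(x, rot_labels)])

with torch.no_grad():                              # targets come from the fixed filters
    gabor_target = F.conv2d(x_rot, bank, padding=bank.shape[-1] // 2)

gabor_pred, rot_logits = net(x_rot)
alpha = 1.0                                        # assumed equal task weighting
loss = F.mse_loss(gabor_pred, gabor_target) + alpha * F.cross_entropy(rot_logits, rot_labels)
opt.zero_grad()
loss.backward()
opt.step()

Because the Gabor targets are computed by fixed, non-learned filters, the task is self-supervised: no labels are needed, and the shared encoder must capture oriented edge and texture statistics that the abstract argues transfer across domains.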