VisDA: The Visual Domain Adaptation Challenge
Domain Adaptation
Testbed
DOI:
10.48550/arXiv.1710.06924
Publication Date:
2017-01-01
AUTHORS (6)
ABSTRACT
We present the 2017 Visual Domain Adaptation (VisDA) dataset and challenge, a large-scale testbed for unsupervised domain adaptation across visual domains. Unsupervised domain adaptation aims to solve the real-world problem of domain shift, where machine learning models trained on one domain must be transferred and adapted to a novel visual domain without additional supervision. The VisDA2017 challenge is focused on the simulation-to-reality shift and has two associated tasks: image classification and image segmentation. The goal in both tracks is to first train a model on simulated, synthetic data in the source domain and then adapt it to perform well on real, unlabeled data in the test domain. Our dataset is the largest to date for cross-domain object classification, with over 280K images across 12 categories in the combined training, validation, and testing domains. The image segmentation dataset is also large-scale, with over 30K images across 18 categories in three domains. We compare VisDA to existing cross-domain adaptation datasets and provide a baseline performance analysis using various domain adaptation models that are currently popular in the field.
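The classification track described above is commonly scored by mean per-class accuracy, which weights each of the 12 categories equally regardless of how many test images it contains. A minimal sketch of that metric in pure Python (the function name and the toy labels are illustrative, not part of the official evaluation code):

```python
from collections import defaultdict

def mean_per_class_accuracy(y_true, y_pred):
    """Average of per-class accuracies: each class counts equally,
    independent of its number of samples (unlike overall accuracy)."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        if t == p:
            correct[t] += 1
    # Average the accuracy of each ground-truth class.
    return sum(correct[c] / total[c] for c in total) / len(total)

# Toy example: class 0 is fully correct, class 1 is half correct,
# so the mean per-class accuracy is (1.0 + 0.5) / 2 = 0.75.
acc = mean_per_class_accuracy([0, 0, 1, 1], [0, 0, 1, 0])
```

Equal per-class weighting matters here because synthetic-to-real benchmarks often have imbalanced category counts, and a model that collapses onto the majority class would otherwise look deceptively strong.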