Adversarial style discrepancy minimization for unsupervised domain adaptation

Keywords: Benchmark, Domain Adaptation, Feature Learning, Minimization
DOI: 10.1016/j.neunet.2022.10.015 Publication Date: 2022-10-22T15:41:48Z
ABSTRACT
Mainstream unsupervised domain adaptation (UDA) methods align feature distributions across different domains via adversarial learning. However, most of them focus on global distribution alignment, ignoring the fine-grained discrepancy between individual samples. Besides, they generally require auxiliary models, bringing extra computation costs. To tackle these issues, this study proposes a UDA method that differentiates individual samples without any auxiliary models. To this end, we introduce a novel discrepancy metric, termed style discrepancy, to distinguish hard target samples. We also propose an adversarial paradigm for style discrepancy minimization (ASDM). Specifically, we fix the parameters of the feature extractor and maximize the style discrepancy to update the classifier, which helps detect more hard samples. Adversely, we fix the parameters of the classifier and minimize the style discrepancy to update the feature extractor, pushing those hard samples near the support of the source distribution. Such an adversarial process progressively adapts hard samples, leading to better adaptation. Experiments on different tasks validate the effectiveness of ASDM. Overall, without any auxiliary models, ASDM reaches 46.9% mIoU on the GTA5→Cityscapes benchmark and 84.7% accuracy on the VisDA-2017 benchmark, outperforming many existing adversarial-learning-based UDA methods.
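The abstract does not define how style discrepancy is computed. As a minimal sketch only, one common way to represent the "style" of a feature map is by its channel-wise mean and standard deviation (as in style-transfer work such as AdaIN); under that assumption, a discrepancy between source and target styles could be measured as the distance between those statistics. The function names and the L2 metric below are illustrative, not the paper's actual formulation:

```python
import numpy as np

def style_stats(feats):
    # feats: (N, C, H, W) feature maps.
    # "Style" here = per-channel mean and std over batch and spatial dims.
    mu = feats.mean(axis=(0, 2, 3))
    sigma = feats.std(axis=(0, 2, 3))
    return mu, sigma

def style_discrepancy(src_feats, tgt_feats):
    # Hypothetical metric: L2 distance between the style statistics
    # of source and target feature maps.
    mu_s, sig_s = style_stats(src_feats)
    mu_t, sig_t = style_stats(tgt_feats)
    return float(np.linalg.norm(mu_s - mu_t) + np.linalg.norm(sig_s - sig_t))

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(4, 8, 16, 16))
tgt_near = src + rng.normal(0.0, 0.01, size=src.shape)   # almost same style
tgt_far = rng.normal(2.0, 3.0, size=src.shape)           # shifted style

print(style_discrepancy(src, src))  # 0.0 for identical features
print(style_discrepancy(src, tgt_near) < style_discrepancy(src, tgt_far))
```

In the paper's alternating scheme, such a metric would be maximized w.r.t. the classifier (to expose hard target samples) and then minimized w.r.t. the feature extractor (to pull those samples toward the source distribution).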