Class conditional distribution alignment for domain adaptation
Keywords: Feature (linguistics), Transfer of learning, Feature vector, Conditional probability distribution, Domain adaptation
DOI: 10.1007/s11768-020-9126-1
Publication Date: 2020-01-23
ABSTRACT
In this paper, we study the problem of domain adaptation, a crucial ingredient of transfer learning between two domains: a source domain with labeled data and a target domain with no or only a few labels. Domain adaptation aims to extract knowledge from the source domain to improve performance on the learning task in the target domain. A popular approach to this problem is adversarial training, which is justified by the HΔH-distance theory. However, traditional adversarial network architectures only align the marginal feature distributions in the feature space; alignment of the class conditional distributions is not guaranteed. We therefore propose a novel method based on pseudo labels and the cluster assumption to avoid incorrect class alignment in the feature space. Experiments demonstrate that our framework improves accuracy on typical transfer learning tasks.
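The approach described in the abstract (adversarial alignment of the marginal feature distribution, supplemented by pseudo labels and the cluster assumption for class conditional alignment) can be illustrated with a minimal sketch. This is not the authors' implementation: the use of PyTorch, the gradient-reversal formulation of the adversarial loss, the network sizes, the confidence threshold, and the loss weights are all assumptions made purely for illustration.

# Minimal sketch (assumed, not the authors' code): domain-adversarial training via a
# gradient-reversal layer for marginal alignment, plus a confident-pseudo-label loss and
# a conditional-entropy loss (cluster assumption) on the target domain.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; gradient scaled by -lambd in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)


# Illustrative network sizes: 256-d inputs, 128-d features, 10 classes (all assumed).
feature_extractor = nn.Sequential(nn.Linear(256, 128), nn.ReLU())
label_classifier = nn.Linear(128, 10)
domain_discriminator = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2))

params = (list(feature_extractor.parameters()) + list(label_classifier.parameters())
          + list(domain_discriminator.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-3)


def train_step(x_src, y_src, x_tgt, lambd=1.0, pseudo_threshold=0.95):
    optimizer.zero_grad()
    f_src = feature_extractor(x_src)
    f_tgt = feature_extractor(x_tgt)

    # (1) Supervised classification loss on labeled source data.
    loss_cls = F.cross_entropy(label_classifier(f_src), y_src)

    # (2) Marginal alignment: domain discriminator on gradient-reversed features.
    f_all = grad_reverse(torch.cat([f_src, f_tgt]), lambd)
    d_labels = torch.cat([torch.zeros(len(f_src), dtype=torch.long),
                          torch.ones(len(f_tgt), dtype=torch.long)])
    loss_dom = F.cross_entropy(domain_discriminator(f_all), d_labels)

    # (3) Class-conditional alignment: cross-entropy on confident target pseudo labels,
    #     plus entropy minimization so decision boundaries avoid dense target clusters.
    logits_tgt = label_classifier(f_tgt)
    probs_tgt = logits_tgt.softmax(dim=1)
    conf, pseudo = probs_tgt.max(dim=1)
    mask = conf > pseudo_threshold
    loss_pseudo = (F.cross_entropy(logits_tgt[mask], pseudo[mask])
                   if mask.any() else logits_tgt.new_zeros(()))
    loss_ent = -(probs_tgt * probs_tgt.clamp_min(1e-8).log()).sum(dim=1).mean()

    loss = loss_cls + loss_dom + 0.1 * loss_pseudo + 0.1 * loss_ent  # weights assumed
    loss.backward()
    optimizer.step()
    return loss.item()


# Toy usage with random tensors standing in for source/target batches.
x_s, y_s = torch.randn(32, 256), torch.randint(0, 10, (32,))
x_t = torch.randn(32, 256)
print(train_step(x_s, y_s, x_t))

In this sketch the gradient-reversal term realizes the marginal feature alignment criticized in the abstract, while the pseudo-label and entropy terms stand in for the class conditional alignment idea; the paper's actual losses and training schedule may differ.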