Learning Generalizable Models via Disentangling Spurious and Enhancing Potential Correlations
DOI:
10.48550/arxiv.2401.05752
Publication Date:
2024-01-01
AUTHORS (5)
ABSTRACT
Domain generalization (DG) intends to train a model on multiple source domains to ensure that it can generalize well to an arbitrary unseen target domain. The acquisition of domain-invariant representations is pivotal for DG, as they possess the ability to capture the inherent semantic information of the data, mitigate the influence of domain shift, and enhance the generalization capability of the model. Adopting multiple perspectives, such as the sample and the feature, proves to be effective. The sample perspective facilitates data augmentation through data manipulation techniques, whereas the feature perspective enables the extraction of meaningful generalization features. In this paper, we focus on improving the generalization ability of the model by compelling it to acquire domain-invariant representations from both the sample and feature perspectives, by disentangling spurious correlations and enhancing potential correlations. 1) From the sample perspective, we develop a frequency restriction module, guiding the model to focus on the relevant correlations between object features and labels, thereby disentangling spurious correlations. 2) From the feature perspective, a simple Tail Interaction module implicitly enhances potential correlations among all samples from all source domains, facilitating the acquisition of domain-invariant representations across multiple domains. Experimental results show that Convolutional Neural Networks (CNNs) or Multi-Layer Perceptrons (MLPs) with a strong baseline embedded with these two modules achieve superior results, e.g., an average accuracy of 92.30% on Digits-DG.
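The frequency restriction idea in the abstract can be illustrated with a minimal sketch. The intuition (an assumption here, not the paper's exact module) is that low-frequency components of an image tend to carry object semantics, while high-frequency components often carry domain-specific texture, so restricting the spectrum to a low-frequency band suppresses one source of spurious correlation. The function name and `keep_ratio` parameter below are hypothetical:

```python
import numpy as np

def frequency_restriction(image, keep_ratio=0.25):
    """Keep only a centered low-frequency band of an image via a 2D FFT mask.

    Hypothetical sketch: low frequencies are assumed to carry object
    shape/semantics, high frequencies domain-specific texture/style.
    """
    h, w = image.shape
    # Shift the zero-frequency component to the center of the spectrum
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    # Build a low-pass box mask around the spectrum center
    mask = np.zeros((h, w))
    ch, cw = h // 2, w // 2
    rh, rw = int(h * keep_ratio / 2), int(w * keep_ratio / 2)
    mask[ch - rh:ch + rh, cw - rw:cw + rw] = 1.0
    # Invert the masked spectrum back to the image domain
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * mask))
    return np.real(filtered)

rng = np.random.default_rng(0)
img = rng.standard_normal((32, 32))
out = frequency_restriction(img, keep_ratio=0.25)
print(out.shape)
```

In a training pipeline such a transform would be applied to inputs (or mixed with the originals) as a data-level augmentation, encouraging the model to rely on the retained low-frequency content rather than high-frequency style cues.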