Federated Unsupervised Domain Generalization Using Global and Local Alignment of Gradients

DOI: 10.1609/aaai.v39i19.34197 Publication Date: 2025-04-11T13:01:21Z
ABSTRACT
We address the problem of federated domain generalization in an unsupervised setting for the first time. We theoretically establish a connection between domain shift and the alignment of gradients in federated learning, and show that aligning the gradients at both the client and server levels can facilitate the generalization of the model to new (target) domains. Building on this insight, we propose a novel method named FedGaLA, which performs gradient alignment at the client level to encourage clients to learn domain-invariant features, as well as global gradient alignment at the server level to obtain a more generalized aggregated model. To empirically evaluate our method, we perform experiments on four commonly used multi-domain datasets: PACS, OfficeHome, DomainNet, and TerraInc. The results demonstrate the effectiveness of our method, which outperforms comparable baselines. Ablation and sensitivity studies show the impact of the different components and parameters of our approach.
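The abstract only names the two levels of gradient alignment; the sketch below is one plausible reading of that idea under stated assumptions, not the authors' implementation. It filters gradient/update vectors by cosine agreement with their mean, once on the client (local alignment across batches or source domains) and once on the server (global alignment across client updates). The function names, the thresholding rule, and the use of a simple mean-direction reference are all illustrative assumptions.

```python
# Hedged sketch of client- and server-level gradient alignment.
# All names and the cosine-threshold rule are assumptions for illustration.
import numpy as np


def cosine(u, v, eps=1e-12):
    """Cosine similarity between two flattened gradient/update vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + eps))


def local_align(batch_grads, threshold=0.0):
    """Client-side alignment (assumed form): keep only per-batch (or per-source-
    domain) gradients whose direction agrees with the mean gradient, nudging the
    client toward domain-invariant update directions."""
    mean_grad = np.mean(batch_grads, axis=0)
    kept = [g for g in batch_grads if cosine(g, mean_grad) > threshold]
    return np.mean(kept, axis=0) if kept else mean_grad


def server_aggregate(client_updates, threshold=0.0):
    """Server-side alignment (assumed form): average only client updates that
    agree with the mean update, so the aggregated model follows directions
    shared across client domains."""
    mean_update = np.mean(client_updates, axis=0)
    kept = [u for u in client_updates if cosine(u, mean_update) > threshold]
    return np.mean(kept, axis=0) if kept else mean_update


# Toy usage: three clients, flattened parameter updates of size 5.
rng = np.random.default_rng(0)
updates = [rng.normal(size=5) for _ in range(3)]
global_step = server_aggregate(updates, threshold=0.0)
print(global_step)
```

A threshold of 0 simply discards vectors pointing against the consensus direction; a stricter positive threshold would be one natural knob to ablate, in the spirit of the sensitivity studies mentioned in the abstract.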