Self-training guided disentangled adaptation for cross-domain remote sensing image semantic segmentation

DOI: 10.1016/j.jag.2023.103646 Publication Date: 2024-01-05T20:51:58Z
ABSTRACT
Remote sensing (RS) image semantic segmentation using deep convolutional neural networks (DCNNs) has shown great success in various applications. However, the high dependence on annotated data makes it challenging for DCNNs to adapt to different RS scenes. To address this challenge, we propose a cross-domain RS image semantic segmentation task that considers ground sampling distance, remote sensor variation, and geographical landscapes as the main factors causing domain shifts between source and target images. To mitigate the negative impact of domain shift, we propose a self-training guided disentangled adaptation network (ST-DASegNet), which consists of student and teacher backbones that extract source-style and target-style features. To align the cross-domain single-style features, we adopt feature-level adversarial learning. We also propose a domain disentangled module (DDM) to extract universal and distinct features from the single-domain cross-style features. Finally, we fuse these features and feed them into the decoders to generate predictions. Moreover, we employ an exponential moving average (EMA) based separated self-training mechanism to ease the instability and disadvantageous effect during adversarial optimization. Experiments on several prominent RS datasets (Potsdam, Vaihingen, and LoveDA) demonstrate that ST-DASegNet outperforms previous methods and achieves new state-of-the-art results. Visualization and analysis further confirm the interpretability of ST-DASegNet. The code is publicly available at https://github.com/cv516Buaa/ST-DASegNet.
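To make the EMA-based self-training mechanism mentioned in the abstract concrete, the following is a minimal PyTorch sketch of an EMA teacher update. The function name, the decay value, and the plain parameter averaging are illustrative assumptions, not the authors' implementation; consult the linked repository for the actual code.

import torch

@torch.no_grad()
def ema_update(teacher: torch.nn.Module, student: torch.nn.Module, decay: float = 0.999) -> None:
    # EMA update: teacher <- decay * teacher + (1 - decay) * student.
    # The teacher accumulates a smoothed copy of the student weights, which
    # stabilizes the pseudo-labels used to guide self-training.
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.mul_(decay).add_(s_p, alpha=1.0 - decay)

In this kind of scheme, only the student is updated by gradient descent; the teacher is refreshed by ema_update after each optimization step, which damps the instability that adversarial training can introduce.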