Unsupervised global-local domain adaptation with self-training for remote sensing image semantic segmentation

Cited by: 0
Authors
Zhang, Junbo [1 ]
Li, Zhiyong [1 ]
Wang, Mantao [1 ]
Li, Kunhong [1 ]
Affiliations
[1] Sichuan Agr Univ, Coll Informat Engn, Yaan, Peoples R China
Keywords
Remote sensing; domain adaptation; adversarial training; self-training;
DOI
10.1080/01431161.2025.2450564
Chinese Library Classification
TP7 [Remote Sensing Technology];
Subject Classification Codes
081102 ; 0816 ; 081602 ; 083002 ; 1404 ;
Abstract
Unsupervised domain adaptation (UDA) techniques have the potential to enhance the transferability of neural network models to unseen scenarios and reduce the labelling costs associated with unlabelled datasets. Popular solutions to this challenging UDA task are adversarial training and self-training. However, current adversarial-based UDA methods emphasize only global or local feature alignment, which is insufficient for tackling the domain shift. In addition, self-training-based methods inevitably produce many wrong pseudo labels on the target domain due to bias towards the source domain. To address these problems, this paper proposes a hybrid training framework that integrates global-local adversarial training and self-training strategies to effectively handle global-local domain shift. First, the adversarial approach measures the discrepancies between domains from both domain- and category-level perspectives. The adversarial network incorporates discriminators at the local-category and global-domain levels, thereby facilitating global-local feature alignment through multi-level adversarial training. Second, the self-training strategy is integrated to acquire domain-specific knowledge, effectively mitigating negative transfer. By combining these two domain adaptation strategies, we present a more efficient approach for mitigating the domain gap. Finally, a self-labelling mechanism is introduced to directly explore the inherent distribution of pixels, allowing for the rectification of pseudo labels generated during the self-training stage. Compared to state-of-the-art UDA methods, the proposed method gains 3.2%, 1.21%, 5.86%, and 6.16% mIoU improvements on Rural → Urban, Urban → Rural, Potsdam → Vaihingen, and Vaihingen → Potsdam, respectively.
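The self-training stage described in the abstract relies on pseudo labels predicted on the unlabelled target domain, with unreliable pixels excluded. Below is a minimal, hypothetical sketch of the standard confidence-thresholded pseudo-labelling step; the function name, threshold value, and ignore index are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Common "unlabelled" value in semantic segmentation datasets (assumption).
IGNORE_INDEX = 255

def generate_pseudo_labels(probs: np.ndarray, threshold: float = 0.9) -> np.ndarray:
    """Confidence-thresholded pseudo labels for self-training.

    probs: (C, H, W) softmax output of the segmentation network on a
    target-domain image. Pixels whose top-class confidence falls below
    `threshold` are marked IGNORE_INDEX so they do not contribute to
    the self-training loss.
    """
    labels = probs.argmax(axis=0)          # predicted class per pixel
    confidence = probs.max(axis=0)         # top-class probability per pixel
    labels[confidence < threshold] = IGNORE_INDEX
    return labels
```

The paper's self-labelling mechanism goes further and rectifies these pseudo labels using the pixel distribution itself; the sketch above only covers the basic thresholding that such a rectification step would refine.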
Pages: 2254-2284
Page count: 31
Related Papers
(50 in total)
[11] Goodfellow, I. J. Advances in Neural Information Processing Systems, 2014, 27: 2672.
[12] He, Kaiming; Zhang, Xiangyu; Ren, Shaoqing; Sun, Jian. Deep Residual Learning for Image Recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016: 770-778.
[13] Hoffman, J. 2016, preprint.
[14] Gerke, Markus (ITC). Use of the Stair Vision Library, 2014.
[15] Lang, Chunbo; Cheng, Gong; Tu, Binfei; Han, Junwei. Few-Shot Segmentation via Divide-and-Conquer Proxies. International Journal of Computer Vision, 2024, 132(1): 261-283.
[16] Lang, Chunbo; Cheng, Gong; Tu, Binfei; Li, Chao; Han, Junwei. Base and Meta: A New Perspective on Few-Shot Segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(9): 10669-10686.
[17] Li, Deren; Zhang, Guifeng; Wu, Zhaocong; Yi, Lina. An Edge Embedded Marker-Based Watershed Algorithm for High Spatial Resolution Remote Sensing Image Segmentation. IEEE Transactions on Image Processing, 2010, 19(10): 2781-2787.
[18] Li, Ruihuang; Li, Shuai; He, Chenhang; Zhang, Yabin; Jia, Xu; Zhang, Lei. Class-Balanced Pixel-Level Self-Labeling for Domain Adaptive Semantic Segmentation. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022: 11583-11593.
[19] Li, Weitao; Gao, Hui; Su, Yi; Momanyi, Biffon Manyura. Unsupervised Domain Adaptation for Remote Sensing Semantic Segmentation with Transformer. Remote Sensing, 2022, 14(19).
[20] Li, Yansheng; Shi, Te; Zhang, Yongjun; Chen, Wei; Wang, Zhibin; Li, Hao. Learning Deep Semantic Segmentation Network Under Multiple Weakly-Supervised Constraints for Cross-Domain Remote Sensing Image Semantic Segmentation. ISPRS Journal of Photogrammetry and Remote Sensing, 2021, 175: 20-33.