Co-Training for Unsupervised Domain Adaptation of Semantic Segmentation Models

Cited by: 6
Authors
Gomez, Jose L. [1 ,2 ]
Villalonga, Gabriel [1 ]
Lopez, Antonio M. [1 ,2 ]
Affiliations
[1] Univ Autonoma Barcelona UAB, Comp Vis Ctr CVC, Bellaterra 08193, Spain
[2] Univ Autonoma Barcelona UAB, Comp Sci Dept, Bellaterra 08193, Spain
Keywords
domain adaptation; semi-supervised learning; semantic segmentation; autonomous driving;
DOI
10.3390/s23020621
CLC Number
O65 [Analytical Chemistry]
Subject Classification Codes
070302; 081704
Abstract
Semantic image segmentation is a core task for autonomous driving, and it is typically performed by deep models. Since training these models requires a costly human-based image-labeling effort, the use of synthetic images with automatically generated labels, together with unlabeled real-world images, is a promising alternative. This implies addressing an unsupervised domain adaptation (UDA) problem. In this paper, we propose a new co-training procedure for synth-to-real UDA of semantic segmentation models. It proceeds in iterations in which the (unlabeled) real-world training images are pseudo-labeled by intermediate deep models trained on both the (labeled) synthetic images and the real-world images pseudo-labeled in previous iterations. More specifically, a self-training stage provides two domain-adapted models, and a model-collaboration loop allows the mutual improvement of these two models. The final semantic segmentation labels (pseudo-labels) for the real-world images are provided by these two models. The overall procedure treats the deep models as black boxes and drives their collaboration at the level of pseudo-labeled target images; i.e., it requires neither modified loss functions nor explicit feature alignment. We test our proposal on standard synthetic and real-world datasets for onboard semantic segmentation. Our procedure shows improvements ranging from approximately 13 to 31 mIoU points over baselines.
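The abstract describes the procedure only at a high level; the following is a minimal sketch of such a black-box co-training loop, in the spirit of what is described above. All names here (train_segmenter, predict_labels, select_confident) are hypothetical placeholders rather than the authors' API, and confidence-based pseudo-label filtering is an assumption, since the abstract does not specify the exact collaboration criterion.

```python
# Minimal sketch of black-box co-training for synth-to-real UDA.
# The injected callables stand in for any segmentation training/inference
# pipeline; the loop itself never touches losses or features.
from typing import Callable, List, Tuple


def co_training_uda(
    synth_images: List, synth_labels: List,
    real_images: List,
    train_segmenter: Callable,   # (images, labels) -> trained model
    predict_labels: Callable,    # (model, images) -> per-pixel predictions
    select_confident: Callable,  # (images, predictions) -> confident (image, pseudo-label) pairs
    num_iterations: int = 5,
) -> Tuple[object, object]:
    """Two models exchange pseudo-labeled real images across iterations."""
    pseudo_a: List[Tuple] = []  # pairs produced by model B to train model A
    pseudo_b: List[Tuple] = []  # pairs produced by model A to train model B

    model_a = model_b = None
    for _ in range(num_iterations):
        # Self-training stage: each model is (re)trained on the labeled
        # synthetic data plus the real images pseudo-labeled so far.
        model_a = train_segmenter(synth_images + [x for x, _ in pseudo_a],
                                  synth_labels + [y for _, y in pseudo_a])
        model_b = train_segmenter(synth_images + [x for x, _ in pseudo_b],
                                  synth_labels + [y for _, y in pseudo_b])

        # Model-collaboration loop: each model pseudo-labels the unlabeled
        # real images, and its confident predictions train the other model.
        preds_a = predict_labels(model_a, real_images)
        preds_b = predict_labels(model_b, real_images)
        pseudo_b = select_confident(real_images, preds_a)  # A teaches B
        pseudo_a = select_confident(real_images, preds_b)  # B teaches A

    # The final pseudo-labels for the real images come from these two models.
    return model_a, model_b
```

Because the loop only exchanges pseudo-labeled target images, any segmentation architecture can be plugged in unchanged, which matches the abstract's claim that the models are treated as black boxes.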
Pages: 28
Related Papers
50 items in total
  • [21] Unsupervised domain adaptation for semantic segmentation via cross-region alignment
    Wang, Zhijie
    Liu, Xing
    Suganuma, Masanori
    Okatani, Takayuki
    COMPUTER VISION AND IMAGE UNDERSTANDING, 2023, 234
  • [22] A multi-grained unsupervised domain adaptation approach for semantic segmentation
    Li, Luyang
    Ma, Tai
    Lu, Yue
    Li, Qingli
    He, Lianghua
    Wen, Ying
    PATTERN RECOGNITION, 2023, 144
  • [23] Temporal Consistency as Pretext Task in Unsupervised Domain Adaptation for Semantic Segmentation
    Barbosa, Felipe
    Osorio, Fernando
    JOURNAL OF INTELLIGENT & ROBOTIC SYSTEMS, 2025, 111 (01)
  • [24] Per-class curriculum for Unsupervised Domain Adaptation in semantic segmentation
    Alcover-Couso, Roberto
    Sanmiguel, Juan C.
    Escudero-Vinolo, Marcos
    Carballeira, Pablo
    VISUAL COMPUTER, 2025, 41 (02) : 901 - 919
  • [25] Multi-modal unsupervised domain adaptation for semantic image segmentation
    Hu, Sijie
    Bonardi, Fabien
    Bouchafa, Samia
    Sidibe, Desire
    PATTERN RECOGNITION, 2023, 137
  • [26] Unsupervised Domain Adaptation Using Generative Adversarial Networks for Semantic Segmentation of Aerial Images
    Benjdira, Bilel
    Bazi, Yakoub
    Koubaa, Anis
    Ouni, Kais
    REMOTE SENSING, 2019, 11 (11)
  • [27] Unsupervised domain adaptation alignment method for cross-domain semantic segmentation of remote sensing images
    Shen Z.
    Ni H.
    Guan H.
Cehui Xuebao/Acta Geodaetica et Cartographica Sinica, 2023, 52 (12) : 1 - 2
  • [28] Benchmarking domain adaptation for semantic segmentation
    Ahmed, Masud
    Hasan, Zahid
    Khan, Naima
    Roy, Nirmalya
    Purushotham, Sanjay
    Gangopadhyay, Aryya
    You, Suya
    UNMANNED SYSTEMS TECHNOLOGY XXIV, 2022, 12124
  • [29] Partial Domain Adaptation on Semantic Segmentation
    Tian, Yingjie
    Zhu, Siyu
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2022, 32 (06) : 3798 - 3809
  • [30] Domain-Agnostic Priors for Semantic Segmentation Under Unsupervised Domain Adaptation and Domain Generalization
    Huo, Xinyue
    Xie, Lingxi
    Hu, Hengtong
    Zhou, Wengang
    Li, Houqiang
    Tian, Qi
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2024, 132 (09) : 3954 - 3976