A Curriculum-Style Self-Training Approach for Source-Free Semantic Segmentation

Cited by: 0
Authors
Wang, Yuxi [1 ]
Liang, Jian [2 ,3 ]
Zhang, Zhaoxiang [1 ,2 ,3 ]
Affiliations
[1] Chinese Academy Sci CAIR HKISI CAS, Hong Kong Inst Sci & Innovat, Ctr Artificial Intelligence & Robot, Hong Kong, Peoples R China
[2] Chinese Acad Sci CASIA, Inst Automat, New Lab Pattern Recognit NLPR, State Key Lab Multimodal Artificial Intelligence, Beijing 100190, Peoples R China
[3] Univ Chinese Acad Sci UCAS, Beijing 100190, Peoples R China
Funding
National Key R&D Program of China; National Natural Science Foundation of China;
Keywords
Domain adaptation; feature alignment; information propagation; negative pseudo labeling; source data-free; DOMAIN ADAPTATION;
DOI
10.1109/TPAMI.2024.3432168
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Source-free domain adaptation has developed rapidly in recent years: a well-trained source model, rather than the source data itself, is adapted to the target domain, which helps address privacy concerns and protect intellectual property. However, many feature alignment techniques from prior domain adaptation methods are infeasible in this challenging setting. We therefore resort to probing inherent domain-invariant feature learning and propose a curriculum-style self-training approach for source-free domain adaptive semantic segmentation. In particular, we introduce a curriculum-style entropy minimization method to explore the implicit knowledge in the source model, fitting the trained source model to the target data using high-certainty information from easy-to-hard predictions. We then train the segmentation network with the proposed complementary curriculum-style self-training, which exploits both negative and positive pseudo-labels in a curriculum-learning manner. Although pseudo-labels with high uncertainty cannot identify the correct class, they can reliably indicate absent classes, yielding negative pseudo-labels. Moreover, we employ an information propagation scheme to further reduce the intra-domain discrepancy within the target domain, which can serve as a standard post-processing step for domain adaptation methods. Furthermore, we extend the proposed method to the more challenging black-box scenario, where only the source model's predictions are available. Extensive experiments validate that our method achieves state-of-the-art performance on source-free semantic segmentation tasks for both synthetic-to-real and adverse-condition datasets.
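The complementary (negative) pseudo-labeling idea in the abstract — classes predicted with near-zero probability at a pixel can safely be treated as absent even when the correct class is uncertain — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation; the function name and the threshold value are assumptions.

```python
import numpy as np

def negative_pseudo_label_loss(probs, neg_threshold=0.05):
    """Sketch of a complementary (negative) pseudo-label loss.

    probs: (N, C) softmax probabilities for N pixels over C classes.
    Classes whose probability falls below `neg_threshold` (an assumed,
    illustrative value) are taken as absent ("negative labels"); the
    loss -log(1 - p) pushes those probabilities further toward zero.
    """
    neg_mask = probs < neg_threshold  # (N, C) boolean: confidently absent classes
    # -log(1 - p) is near 0 when p is already small, large if p grows.
    neg_loss = -np.log(np.clip(1.0 - probs, 1e-8, 1.0))
    # Average over the selected negative entries only.
    return float(neg_loss[neg_mask].mean())

# Toy example: 2 pixels, 3 classes; entries 0.02 and 0.03 become negative labels.
probs = np.array([[0.90, 0.08, 0.02],
                  [0.50, 0.47, 0.03]])
loss = negative_pseudo_label_loss(probs, neg_threshold=0.05)
```

In training, such a term would typically be combined with a standard cross-entropy loss on high-confidence positive pseudo-labels, with the threshold scheduled in a curriculum fashion.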
Pages: 9890-9907
Page count: 18