Memory-Enhanced Confidence Calibration for Class-Incremental Unsupervised Domain Adaptation

Cited: 0
Authors
Yu, Jiaping [1 ]
Yang, Muli [2 ]
Wu, Aming [1 ]
Deng, Cheng [1 ]
Affiliations
[1] Xidian Univ, Sch Elect Engn, Xian 710071, Peoples R China
[2] ASTAR, Inst Infocomm Res I2R, Singapore 138632, Singapore
Funding
National Natural Science Foundation of China; National Key Research and Development Program of China;
Keywords
Data models; Training; Adaptation models; Calibration; Feature extraction; Character recognition; Accuracy; Tail; Predictive models; Incremental learning; Unsupervised domain adaptation; class incremental learning; image recognition; causality;
DOI
10.1109/TMM.2024.3521834
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology];
Discipline Code
0812;
Abstract
In this paper, we focus on Class-Incremental Unsupervised Domain Adaptation (CI-UDA), where the labeled source domain already includes all classes, while the classes in the unlabeled target domain emerge sequentially over time. This task involves two main challenges. The first is the domain gap between the labeled source data and the unlabeled target data, which leads to weak generalization performance. The second is the inconsistency between the source and target category spaces at each time step, which causes catastrophic forgetting during the testing stage. Previous methods focus solely on aligning similar samples from different domains, overlooking the underlying causes of the domain gap and the class-distribution difference. To tackle these challenges, we rethink this task from a causal perspective for the first time. We first build a structural causal graph to describe the CI-UDA problem. Based on this causal graph, we present Memory-Enhanced Confidence Calibration (MECC), which aims to improve confidence in the predicted results. In particular, we argue that the domain discrepancy caused by differing styles tends to make the model produce less confident predictions, thereby weakening its generalization and continual learning abilities. To this end, we first explore using the Gram matrix to generate source-style target data, which is combined with the original data to jointly train the model and thereby reduce the impact of domain shift. Second, we use the model from the previous time step to select corresponding samples for building a memory bank, which helps alleviate catastrophic forgetting. Extensive experimental results on multiple datasets demonstrate the superiority of our method.
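The Gram matrix mentioned in the abstract is the standard channel-correlation style representation from neural style transfer. As a minimal sketch (not the authors' implementation), it can be computed from a convolutional feature map as follows; the function name and normalization choice here are illustrative assumptions:

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a feature map of shape (C, H, W).

    Flattens each channel to a vector, then computes the pairwise
    inner products between channels -- a (C, C) matrix capturing
    the "style" statistics of the features, independent of spatial
    layout. Normalized by the total number of entries.
    """
    c, h, w = features.shape
    flat = features.reshape(c, h * w)       # (C, H*W)
    return flat @ flat.T / (c * h * w)      # (C, C), symmetric

# Toy example: 2 channels over a 2x2 spatial grid
feats = np.arange(8, dtype=np.float64).reshape(2, 2, 2)
G = gram_matrix(feats)
print(G.shape)  # (2, 2)
```

Matching the Gram statistics of target features to those of source features is one common way to transfer source "style" onto target data while preserving content.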
Pages: 610-621 (12 pages)
Related Papers (50 total)
  • [1] Class-Incremental Unsupervised Domain Adaptation via Pseudo-Label Distillation
    Wei, Kun
    Yang, Xu
    Xu, Zhe
    Deng, Cheng
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2024, 33 : 1188 - 1198
  • [2] Prototype-Guided Continual Adaptation for Class-Incremental Unsupervised Domain Adaptation
    Lin, Hongbin
    Zhang, Yifan
    Qiu, Zhen
    Niu, Shuaicheng
    Gan, Chuang
    Liu, Yanxia
    Tan, Mingkui
    COMPUTER VISION - ECCV 2022, PT XXXIII, 2022, 13693 : 351 - 368
  • [3] Learnable Distribution Calibration for Few-Shot Class-Incremental Learning
    Liu, Binghao
    Yang, Boyu
    Xie, Lingxi
    Wang, Ren
    Tian, Qi
    Ye, Qixiang
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (10) : 12699 - 12706
  • [4] Unsupervised Domain Adaptation With Class-Aware Memory Alignment
    Wang, Hui
    Zheng, Liangli
    Zhao, Hanbin
    Li, Shijian
    Li, Xi
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (07) : 9930 - 9942
  • [5] Memory Efficient Class-Incremental Learning for Image Classification
    Zhao, Hanbin
    Wang, Hui
    Fu, Yongjian
    Wu, Fei
    Li, Xi
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, 33 (10) : 5966 - 5977
  • [6] DCL: Dipolar Confidence Learning for Source-Free Unsupervised Domain Adaptation
    Tian, Qing
    Sun, Heyang
    Peng, Shun
    Zheng, Yuhui
    Wan, Jun
    Lei, Zhen
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (06) : 4342 - 4353
  • [7] Class-Incremental Learning with Topological Schemas of Memory Spaces
    Chang, Xinyuan
    Tao, Xiaoyu
    Hong, Xiaopeng
    Wei, Xing
    Ke, Wei
    Gong, Yihong
    2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021, : 9719 - 9726
  • [8] MoBoo: Memory-Boosted Vision Transformer for Class-Incremental Learning
    Ni, Bolin
    Nie, Xing
    Zhang, Chenghao
    Xu, Shixiong
    Zhang, Xin
    Meng, Gaofeng
    Xiang, Shiming
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (11) : 11169 - 11183
  • [9] Toward Cross-Domain Class-Incremental Remote Sensing Scene Classification
    Zhang, Li
    Fu, Sichao
    Wang, Wuli
    Ren, Peng
    Peng, Qinmu
    Ren, Guangbo
    Liu, Baodi
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2024, 62
  • [10] Unsupervised Domain Adaptation Based on Pseudo-Label Confidence
    Fu, Tingting
    Li, Ying
    IEEE ACCESS, 2021, 9 : 87049 - 87057