Transferable adversarial masked self-distillation for unsupervised domain adaptation

Cited by: 7
Authors
Xia, Yuelong [1 ,2 ]
Yun, Li-Jun [1 ,2 ]
Yang, Chengfu [1 ,2 ]
Affiliations
[1] Yunnan Normal Univ, Sch Informat Sci & Technol, Kunming 650500, Peoples R China
[2] Engn Res Ctr Comp Vis & Intelligent Control Techno, Dept Educ Yunnan Prov, Kunming 650500, Peoples R China
Keywords
Unsupervised domain adaptation; Masked self-distillation; Masked image modeling; Adversarial weighted cross-domain adaptation;
DOI
10.1007/s40747-023-01094-4
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104; 0812; 0835; 1405;
Abstract
Unsupervised domain adaptation (UDA) aims to transfer knowledge from a labeled source domain to a related unlabeled target domain. Most existing works focus on minimizing the domain discrepancy to learn global domain-invariant representations using CNN-based architectures, while ignoring transferable and discriminative local representations, e.g., pixel-level and patch-level representations. In this paper, we propose Transferable Adversarial Masked Self-distillation (TAMS), built on a Vision Transformer architecture, to enhance the transferability of UDA. Specifically, TAMS jointly optimizes three objectives to learn both task-specific class-level global representations and domain-specific local representations. First, we introduce an adversarial masked self-distillation objective that distills the representation of a full image into the representation predicted from a masked image, aiming to learn task-specific global class-level representations. Second, we introduce a masked image modeling objective to learn local pixel-level representations. Third, we introduce an adversarial weighted cross-domain adaptation objective that captures the discriminative potential of patch tokens, aiming to learn both transferable and discriminative domain-specific patch-level representations. Extensive studies on four benchmarks show that our proposed method achieves remarkable improvements over previous state-of-the-art UDA methods.
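The three objectives described in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the function names, toy tensor shapes, and per-patch weighting scheme are all illustrative assumptions; the sketch only shows the general shape of each loss term (distillation via KL divergence, masked-pixel reconstruction error, and a weighted patch-level domain discriminator loss) and their joint sum.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def self_distillation_loss(teacher_logits, student_logits):
    """KL(teacher || student): distill the full-image prediction into
    the prediction made from the masked image (objective 1, simplified)."""
    p = softmax(teacher_logits)
    q = softmax(student_logits)
    return float(np.mean(np.sum(p * (np.log(p) - np.log(q)), axis=-1)))

def masked_image_modeling_loss(pixels, recon, mask):
    """Mean squared reconstruction error on masked patches only (objective 2)."""
    se = (pixels - recon) ** 2
    return float((se * mask).sum() / mask.sum())

def weighted_domain_adversarial_loss(domain_logits, domain_labels, weights):
    """Weighted binary cross-entropy of a patch-level domain discriminator;
    per-patch weights emphasize discriminative patch tokens (objective 3)."""
    p = 1.0 / (1.0 + np.exp(-domain_logits))  # sigmoid
    bce = -(domain_labels * np.log(p) + (1 - domain_labels) * np.log(1 - p))
    return float((weights * bce).sum() / weights.sum())

# Toy tensors: batch of 4 images, 10 classes, 16 patches of 8 pixels each.
t_logits = rng.normal(size=(4, 10))                    # teacher: full image
s_logits = t_logits + 0.1 * rng.normal(size=(4, 10))   # student: masked image
pixels   = rng.normal(size=(4, 16, 8))
recon    = pixels + 0.05 * rng.normal(size=(4, 16, 8))
mask     = (rng.random((4, 16, 1)) < 0.75).astype(float)  # ~75% patches masked
d_logits = rng.normal(size=(4, 16))                       # patch domain logits
d_labels = np.repeat([0.0, 0.0, 1.0, 1.0], 16).reshape(4, 16)  # source/target
weights  = rng.random((4, 16))                            # illustrative weights

total = (self_distillation_loss(t_logits, s_logits)
         + masked_image_modeling_loss(pixels, recon, mask)
         + weighted_domain_adversarial_loss(d_logits, d_labels, weights))
print(total > 0)
```

In the paper's actual training pipeline, the teacher and student share a Vision Transformer backbone and the domain discriminator is trained adversarially; here each term is computed on random data only to make the structure of the joint objective concrete.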
Pages: 6567-6580
Page count: 14
Related papers (50 in total)
  • [1] Transferable adversarial masked self-distillation for unsupervised domain adaptation
    Yuelong Xia
    Li-Jun Yun
    Chengfu Yang
    Complex & Intelligent Systems, 2023, 9 : 6567 - 6580
  • [2] Masked Self-Distillation Domain Adaptation for Hyperspectral Image Classification
    Fang, Zhuoqun
    He, Wenqiang
    Li, Zhaokui
    Du, Qian
    Chen, Qiusheng
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2024, 62
  • [3] Self-Distillation for Unsupervised 3D Domain Adaptation
    Cardace, Adriano
    Spezialetti, Riccardo
    Ramirez, Pierluigi Zama
    Salti, Samuele
    Di Stefano, Luigi
    2023 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), 2023, : 4155 - 4166
  • [4] Semantic-aware for point cloud domain adaptation with self-distillation learning
    Yang, Jiming
    Da, Feipeng
    Hong, Ru
    IMAGE AND VISION COMPUTING, 2025, 154
  • [5] Transferable Feature Selection for Unsupervised Domain Adaptation
    Yan, Yuguang
    Wu, Hanrui
    Ye, Yuzhong
    Bi, Chaoyang
    Lu, Min
    Liu, Dapeng
    Wu, Qingyao
    Ng, Michael K.
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2022, 34 (11) : 5536 - 5551
  • [6] Transferable attention networks for adversarial domain adaptation
    Zhang, Changchun
    Zhao, Qingjie
    Wang, Yu
    INFORMATION SCIENCES, 2020, 539 : 422 - 433
  • [7] Learning Transferable Parameters for Unsupervised Domain Adaptation
    Han, Zhongyi
    Sun, Haoliang
    Yin, Yilong
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2022, 31 : 6424 - 6439
  • [8] Reverse Self-Distillation Overcoming the Self-Distillation Barrier
    Ni, Shuiping
    Ma, Xinliang
    Zhu, Mingfu
    Li, Xingwang
    Zhang, Yu-Dong
    IEEE OPEN JOURNAL OF THE COMPUTER SOCIETY, 2023, 4 : 195 - 205
  • [9] Adversarial Robustness for Unsupervised Domain Adaptation
    Awais, Muhammad
    Zhou, Fengwei
    Xu, Hang
    Hong, Lanqing
    Luo, Ping
    Bae, Sung-Ho
    Li, Zhenguo
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 8548 - 8557
  • [10] Masked autoencoders with generalizable self-distillation for skin lesion segmentation
    Zhi, Yichen
    Bie, Hongxia
    Wang, Jiali
    Ren, Lihan
    MEDICAL & BIOLOGICAL ENGINEERING & COMPUTING, 2024,