Run and Chase: Towards Accurate Source-Free Domain Adaptive Object Detection

Cited: 0
Authors
Lin, Luojun [1]
Yang, Zhifeng [1]
Liu, Qipeng [1]
Yu, Yuanlong [1]
Lin, Qifeng [1]
Affiliations
[1] Fuzhou Univ, Coll Comp & Data Sci, Fuzhou, Peoples R China
Source
2023 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO, ICME | 2023
Funding
National Natural Science Foundation of China
Keywords
Object Detection; Transfer Learning; Unsupervised Domain Adaptation
DOI
10.1109/ICME55011.2023.00418
CLC Classification
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Recently, there has been increasing interest in the Source-Free Domain Adaptive Object Detection task, which involves training an object detector on unlabeled target data using a pre-trained source model, without access to the source data. Most related methods build on the mean-teacher framework, which trains the student model to match the teacher model via pseudo labeling, where the teacher model is the exponential moving average of the student models at different time-steps. Following this line of work, we propose a Run-and-Chase Mutual-Learning method that strengthens the interaction between the student and teacher models at both the feature and prediction levels. In our method, the student model is optimized to run away from the teacher model at the feature level, while chasing the teacher model at the prediction level. In this way, the student model is forced to remain distinguishable across time-steps, so the teacher model acquires more diverse task-related information and produces higher-accuracy pseudo labels. As training proceeds, the student and teacher models are updated iteratively and promoted mutually, which prevents the model-collapse problem. Extensive experiments validate the effectiveness of our method.
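The abstract's core mechanics can be sketched in a few lines: a teacher maintained as the exponential moving average (EMA) of the student, and a combined objective in which the student is pushed away from the teacher in feature space ("run") while being pulled toward the teacher's predictions ("chase"). The minimal NumPy sketch below assumes squared-error distances and a weighting factor `lam`; the paper's actual losses, distance measures, and hyperparameters may differ.

```python
import numpy as np

def ema_update(teacher, student, alpha=0.99):
    """Teacher weights as the exponential moving average of student weights."""
    return {k: alpha * teacher[k] + (1 - alpha) * student[k] for k in teacher}

def run_and_chase_loss(f_student, f_teacher, p_student, p_teacher, lam=0.1):
    """Combined objective for the student (hypothetical form).

    'Run': the negative sign rewards feature-level divergence from the teacher.
    'Chase': the student's predictions are pulled toward the teacher's
    pseudo labels at the prediction level.
    """
    run = -np.mean((f_student - f_teacher) ** 2)
    chase = np.mean((p_student - p_teacher) ** 2)
    return chase + lam * run
```

With identical student and teacher features the loss reduces to the prediction-matching term alone; diverging features lower the loss, which is what drives the student to stay distinguishable from the teacher across time-steps.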
Pages: 2453-2458 (6 pages)