AsyFOD: An Asymmetric Adaptation Paradigm for Few-Shot Domain Adaptive Object Detection

Cited: 7
Authors
Gao, Yipeng [1 ,3 ]
Lin, Kun-Yu [1 ,3 ]
Yan, Junkai [1 ,3 ]
Wang, Yaowei [2 ]
Zheng, Wei-Shi [1 ,2 ,3 ]
Affiliations
[1] Sun Yat Sen Univ, Sch Comp Sci & Engn, Guangzhou, Peoples R China
[2] Pengcheng Lab, Shenzhen, Peoples R China
[3] Minist Educ, Key Lab Machine Intelligence & Adv Comp, Guangzhou, Peoples R China
Source
2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR | 2023
DOI
10.1109/CVPR52729.2023.00318
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104 ; 0812 ; 0835 ; 1405
Abstract
In this work, we study few-shot domain adaptive object detection (FSDAOD), where only a few labeled target images are available for training in addition to sufficient labeled source images. Critically, in FSDAOD, data scarcity in the target domain leads to an extreme data imbalance between the source and target domains, which potentially causes over-adaptation in traditional feature alignment. To address this imbalance, we propose an asymmetric adaptation paradigm, namely AsyFOD, which leverages the source and target instances from different perspectives. Specifically, using target distribution estimation, AsyFOD first identifies the target-similar source instances, which serve to augment the limited target instances. It then conducts asynchronous alignment between the target-dissimilar source instances and the augmented target instances, a simple yet effective way to alleviate over-adaptation. Extensive experiments demonstrate that AsyFOD outperforms all state-of-the-art methods on four FSDAOD benchmarks with various environmental variations, e.g., a 3.1% mAP improvement on Cityscapes-to-FoggyCityscapes and a 2.9% mAP increase on Sim10k-to-Cityscapes. The code is available at https://github.com/Hlings/AsyFOD.
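To make the abstract's two-step paradigm concrete, below is a minimal PyTorch sketch: estimate the target distribution, split the source instances into target-similar and target-dissimilar pools, augment the scarce target set with the former, and align against the latter. All function names, the mean-feature distribution estimate, the fixed split ratio, and the mean-discrepancy alignment loss are illustrative assumptions for exposition only; the paper's actual estimator and alignment loss differ (see the repository linked above).

import torch

def split_source_by_target_similarity(source_feats, target_feats, ratio=0.5):
    # Hypothetical first step: estimate the target distribution (here, naively,
    # as the mean target feature) and rank every source instance by its distance
    # to that estimate. The nearest fraction is treated as "target-similar" and
    # later augments the scarce target set; the remainder is "target-dissimilar".
    target_proto = target_feats.mean(dim=0)                 # crude distribution estimate
    dists = torch.norm(source_feats - target_proto, dim=1)  # per-instance distance
    order = torch.argsort(dists)                            # nearest first
    k = int(ratio * source_feats.shape[0])
    return source_feats[order[:k]], source_feats[order[k:]]

def asynchronous_alignment_loss(dissimilar_src, augmented_tgt):
    # Toy stand-in for the asynchronous alignment step: a mean-feature
    # discrepancy between the target-dissimilar source pool and the augmented
    # target pool. The loss used in the paper differs.
    return torch.norm(dissimilar_src.mean(dim=0) - augmented_tgt.mean(dim=0))

# Usage with random tensors standing in for detector instance embeddings:
src = torch.randn(1000, 256)  # abundant source instances
tgt = torch.randn(8, 256)     # few-shot target instances
similar, dissimilar = split_source_by_target_similarity(src, tgt)
aug_tgt = torch.cat([tgt, similar])        # augment the scarce target set
loss = asynchronous_alignment_loss(dissimilar, aug_tgt)
print(loss.item())

The asymmetry of the paradigm shows up in the final call: the two pools entering the alignment loss are deliberately different subsets, rather than the full source and target sets used by symmetric feature alignment.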
Pages: 3261 - 3271
Page count: 11
Related Papers
50 records in total (first 10 shown)
  • [1] Few-Shot Domain Adaptive Object Detection for Microscopic Images
    Inayat, Sumayya
    Dilawar, Nimra
    Sultani, Waqas
    Ali, Mohsen
    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION - MICCAI 2024, PT XII, 2024, 15012 : 98 - 108
  • [2] Few-Shot Object Detection Based on Global Domain Adaptation Strategy
    Gong, Xiaolin
    Cai, Youpeng
    Wang, Jian
    Liu, Daqing
    Ma, Yongtao
    NEURAL PROCESSING LETTERS, 2025, 57 (01)
  • [3] AcroFOD: An Adaptive Method for Cross-Domain Few-Shot Object Detection
    Gao, Yipeng
    Yang, Lingxiao
    Huang, Yunmu
    Xie, Song
    Li, Shiyong
    Zheng, Wei-Shi
    COMPUTER VISION - ECCV 2022, PT XXXIII, 2022, 13693 : 673 - 690
  • [4] σ-Adaptive Decoupled Prototype for Few-Shot Object Detection
    Du, Jinhao
    Zhang, Shan
    Chen, Qiang
    Le, Haifeng
    Sun, Yanpeng
    Ni, Yao
    Wang, Jian
    He, Bin
    Wang, Jingdong
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 18904 - 18914
  • [5] See you somewhere in the ocean: few-shot domain adaptive underwater object detection
    Han, Lu
    Zhai, JiPing
    Yu, Zhibin
    Zheng, Bing
    FRONTIERS IN MARINE SCIENCE, 2023, 10
  • [6] Few-Shot Adversarial Domain Adaptation
    Motiian, Saeid
    Jones, Quinn
    Iranmanesh, Seyed Mehdi
    Doretto, Gianfranco
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 30 (NIPS 2017), 2017, 30
  • [7] Few-Shot Object Detection: A Survey
    Antonelli, Simone
    Avola, Danilo
    Cinque, Luigi
    Crisostomi, Donato
    Foresti, Gian Luca
    Galasso, Fabio
    Marini, Marco Raoul
    Mecca, Alessio
    Pannone, Daniele
    ACM COMPUTING SURVEYS, 2022, 54 (11S)
  • [8] Few-Shot Object Counting and Detection
    Thanh Nguyen
    Chau Pham
    Khoi Nguyen
    Minh Hoai
    COMPUTER VISION, ECCV 2022, PT XX, 2022, 13680 : 348 - 365
  • [9] Few-Shot Video Object Detection
    Fan, Qi
    Tang, Chi-Keung
    Tai, Yu-Wing
    COMPUTER VISION, ECCV 2022, PT XX, 2022, 13680 : 76 - 98
  • [10] Few-Shot Object Detection of Drones
    Zou, Weibao
    Liu, Xindi
    Yang, Jitao
    Qu, Wei
    INTERNATIONAL CONFERENCE ON ELECTRICAL, COMPUTER AND ENERGY TECHNOLOGIES (ICECET 2021), 2021, : 1030 - 1034