EBCDet: Energy-Based Curriculum for Robust Domain Adaptive Object Detection

Cited by: 21
Authors
Banitalebi-Dehkordi, Amin [1]
Amirkhani, Abdollah [2]
Mohammadinasab, Alireza [2]
Affiliations
[1] Huawei Technol Canada Co Ltd, Big Data & Intelligence Platform Lab, Markham, ON L3R 5A4, Canada
[2] Iran Univ Sci & Technol, Sch Automot Engn, Tehran 1684613114, Iran
Keywords
Object detection; domain adaptation; energy; model robustness; curriculum learning
DOI
10.1109/ACCESS.2023.3298369
Chinese Library Classification (CLC) Number
TP [automation technology; computer technology]
Subject Classification Code
0812
Abstract
This paper proposes a new method for addressing the problem of unsupervised domain adaptation for robust object detection. To this end, we propose an energy-based curriculum for progressively adapting a model, thereby mitigating the pseudo-label noise caused by domain shifts. Throughout the adaptation process, we also make use of spatial domain mixing as well as knowledge distillation to improve pseudo-label reliability. Our method requires no modifications to the model architecture and no special training tricks or complications. Our end-to-end pipeline, although simple, proves effective in adapting object detector neural networks. To verify our method, we perform an extensive, systematic set of experiments on a synthetic-to-real scenario, a cross-camera setup, cross-domain artistic datasets, and image corruption benchmarks, and establish a new state of the art in several cases. For example, compared to the best existing baselines, our Energy-Based Curriculum learning method for robust object Detection (EBCDet) achieves a 1-3% AP50 improvement on Sim10k-to-Cityscapes and KITTI-to-Cityscapes, a 3-4% AP50 boost on Pascal-VOC-to-Comic, WaterColor, and ClipArt, and a 1-5% relative robustness improvement on Pascal-C, COCO-C, and Cityscapes-C (1-2% absolute mPC). Code is available at: https://github.com/AutomotiveML/EBCDet.
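The abstract's core idea of an energy-based curriculum can be sketched as follows: score each unlabeled target sample with the free-energy of the detector's classifier logits, then adapt on low-energy (in-distribution-looking, hence more reliably pseudo-labeled) samples first. This is a minimal illustrative sketch, not the authors' implementation; the function names and the easy-to-hard ordering via plain logit energies are assumptions for illustration.

```python
import math

def free_energy_score(logits, temperature=1.0):
    """Free energy E(x) = -T * log(sum_c exp(logit_c / T)).

    Lower energy means the classifier head assigns the sample high
    overall confidence, a common proxy for pseudo-label reliability.
    """
    # Stabilized log-sum-exp to avoid overflow for large logits.
    m = max(l / temperature for l in logits)
    s = sum(math.exp(l / temperature - m) for l in logits)
    return -temperature * (m + math.log(s))

def curriculum_order(per_sample_logits):
    """Return sample indices sorted from low to high energy (easy to hard)."""
    energies = [free_energy_score(l) for l in per_sample_logits]
    return sorted(range(len(energies)), key=lambda i: energies[i])

# Toy example: four target "samples" with 3-class logits.
toy_logits = [
    [8.0, 0.1, 0.2],  # very confident -> lowest energy, adapted first
    [1.0, 1.1, 0.9],  # near-uniform  -> highest energy, adapted last
    [5.0, 0.3, 0.1],
    [2.0, 2.1, 1.9],
]
order = curriculum_order(toy_logits)  # easy-to-hard index order
```

A curriculum scheduler would then feed pseudo-labeled target batches to self-training in this order, widening the admitted energy range as adaptation progresses.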
Pages: 77810-77825
Page count: 16