Domain Adaptive Object Detection with Dehazing Module

Times Cited: 0
Authors
Pan, Gang [1 ]
Liu, Kang [1 ]
Li, Jingxin [1 ]
Zhang, Rufei [2 ]
Shen, Sheng [2 ]
Zeng, Zhiliang [2 ]
Wang, Jiahao [1 ]
Sun, Di [3 ]
Affiliations
[1] Tianjin Univ, 135 Yaguan Rd, Tianjin 300350, Peoples R China
[2] Beijing Inst Control & Elect Technol, 51 Jia, Beijing 100032, Peoples R China
[3] Tianjin Univ Sci & Technol, 1038 Dagunanlu Rd, Tianjin 300222, Peoples R China
Source
ADVANCED INTELLIGENT COMPUTING TECHNOLOGY AND APPLICATIONS, PT XI, ICIC 2024 | 2024, Vol. 14872
Keywords
Foggy Object Detection; Domain Adaptation; Dehazing Module;
DOI
10.1007/978-981-97-5612-4_7
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In foggy object detection tasks, airborne particles reduce imaging clarity, causing a significant drop in detection accuracy. Existing dehazing networks lack evaluation metrics geared to high-level tasks, and simply prepending a dehazing network limits the adaptability of the object detection network. To address these issues, this paper proposes training the dehazing network with a perceptual loss derived from the object detection network. This approach improves the dehazing network's usefulness for high-level tasks and overcomes the limitations of quantitative evaluation indices such as PSNR. We compare the results of training DefogNet with perceptual loss and with pixel-level loss, and obtain the best PSNR and SSIM results when both losses are combined. Although an object detection network connected to a dehazing network can handle detection tasks in foggy scenes, its accuracy still decreases in such scenarios. We therefore propose the DefogDA-FasterRCNN network, which incorporates domain adaptation into the integrated network, making the object detection module domain-adaptive across the foggy and non-foggy domains that pass through the dehazing module. Foggy images gain clearer features through the dehazing network, and the residual negative impact of foggy inputs after dehazing is weakened by domain adaptation.
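The training scheme described in the abstract can be sketched as follows. This is a minimal, hypothetical PyTorch illustration (not the authors' implementation): the dehazed output is compared to the clear image both pixel-wise and in the feature space of a frozen detector backbone, so the dehazer is optimized for the downstream detection task rather than for PSNR alone. `DetectorBackboneStub` and the loss weights are stand-ins invented for this sketch.

```python
import torch
import torch.nn as nn

class DetectorBackboneStub(nn.Module):
    """Stand-in for the object detector's frozen feature extractor."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.features(x)

def dehazing_loss(dehazed, clear, backbone, w_pixel=1.0, w_perc=0.1):
    """Pixel-level L1 loss plus a perceptual loss in detector feature space."""
    pixel = nn.functional.l1_loss(dehazed, clear)
    with torch.no_grad():
        feat_clear = backbone(clear)      # target features, no gradient
    feat_dehazed = backbone(dehazed)      # gradients flow back to the dehazer
    perceptual = nn.functional.mse_loss(feat_dehazed, feat_clear)
    return w_pixel * pixel + w_perc * perceptual

# Freeze the backbone: only the dehazing network should be updated.
backbone = DetectorBackboneStub().eval()
for p in backbone.parameters():
    p.requires_grad_(False)

# Dummy tensors standing in for a dehazing-network output and a clear image.
dehazed = torch.rand(2, 3, 64, 64, requires_grad=True)
clear = torch.rand(2, 3, 64, 64)
loss = dehazing_loss(dehazed, clear, backbone)
loss.backward()
```

In a real pipeline, `dehazed` would be the output of DefogNet and the backbone would come from the Faster R-CNN detector; combining both loss terms mirrors the paper's finding that pixel-level and perceptual supervision together give the best PSNR/SSIM results.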
Pages: 74-83
Number of Pages: 10