Adaptive Dehazing YOLO for Object Detection

Cited by: 5
Authors
Zhang, Kaiwen [1 ]
Yan, Xuefeng [1 ,2 ]
Wang, Yongzhen [1 ]
Qi, Junchen [3 ]
Affiliations
[1] Nanjing Univ Aeronaut & Astronaut, Nanjing, Peoples R China
[2] Collaborat Innovat Ctr Novel Software Technol & I, Nanjing, Peoples R China
[3] North China Elect Power Univ, Baoding, Peoples R China
Keywords
Object detection; Image restoration; Adverse weather
DOI
10.1007/978-3-031-44195-0_2
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
While CNN-based object detection methods perform well on clear images, they produce poor results under adverse weather conditions because of image degradation. To address this issue, we propose a novel Adaptive Dehazing YOLO (DH-YOLO) framework that reduces the impact of weather information on the detection task. DH-YOLO is a multi-task learning paradigm that jointly optimizes object detection and image restoration in an end-to-end fashion. In the image restoration module, the feature extraction network serves as an encoder, and a Feature Filtering Module (FFM) removes redundant features. The FFM contains an Adaptive Dehazing Module for image recovery, whose parameters are computed efficiently by a lightweight Cascaded Partial Decoder. This allows the framework to exploit weather-invariant information in hazy images and extract haze-free features. By sharing three feature layers at different scales between the two subtasks, the detection network benefits from these clear features. DH-YOLO is built on YOLOv4 and, together with the above modules, forms a unified end-to-end model. Experimental results show that our method outperforms many advanced detection methods on real-world foggy datasets, demonstrating its effectiveness for object detection under adverse weather conditions.
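The abstract describes a joint detection-plus-restoration training scheme: a shared backbone feeds both a YOLO-style detector and a restoration branch (the FFM with its Adaptive Dehazing Module), and the two losses are optimized together end-to-end. The sketch below illustrates that general pattern in PyTorch under stated assumptions; the class names (SharedBackbone, FeatureFilteringModule), layer choices, and the balancing weight lam are placeholders for illustration, not the authors' actual DH-YOLO implementation.

```python
# Minimal sketch of joint detection + dehazing optimization with a shared
# backbone, in the spirit of the abstract. All module names and layer
# configurations are illustrative assumptions, not the paper's architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedBackbone(nn.Module):
    """Feature extractor returning three feature maps at different scales,
    used both by the detector and as the restoration encoder."""
    def __init__(self):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.SiLU())
        self.stage2 = nn.Sequential(nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.SiLU())
        self.stage3 = nn.Sequential(nn.Conv2d(128, 256, 3, stride=2, padding=1), nn.SiLU())

    def forward(self, x):
        f1 = self.stage1(x)
        f2 = self.stage2(f1)
        f3 = self.stage3(f2)
        return f1, f2, f3  # three scales shared between the two subtasks

class FeatureFilteringModule(nn.Module):
    """Stand-in for the FFM: gates (filters) haze-related activations and
    decodes a coarse dehazed image from the shared shallow features."""
    def __init__(self):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(64, 64, 1), nn.Sigmoid())
        self.decode = nn.Sequential(
            nn.Conv2d(64, 32, 3, padding=1), nn.SiLU(),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, f1):
        filtered = f1 * self.gate(f1)   # suppress redundant / haze features
        return self.decode(filtered)    # coarse restored (dehazed) image

def joint_loss(det_loss, restored, clear_img, lam=0.1):
    """Weighted sum of the detection loss and an L1 restoration loss.
    lam is an assumed balancing weight, not a value from the paper."""
    return det_loss + lam * F.l1_loss(restored, clear_img)

# Usage sketch for one training step on a (hazy, clear) image pair.
backbone, ffm = SharedBackbone(), FeatureFilteringModule()
hazy = torch.randn(1, 3, 64, 64)
clear = torch.randn(1, 3, 64, 64)
f1, f2, f3 = backbone(hazy)
restored = ffm(f1)
det_loss = torch.tensor(0.0)  # placeholder; a YOLO head would compute this from f1, f2, f3
loss = joint_loss(det_loss, restored, clear)
loss.backward()  # here gradients reach the shared backbone through the restoration branch
```

Because both losses back-propagate through the same backbone when a real detection head is attached, training on hazy/clear pairs pushes the shared features toward weather-invariant representations, which is the mechanism the abstract attributes to DH-YOLO.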
Pages: 14-27 (14 pages)
Related Papers (50 items)
  • [31] S-YOLO: A small object detection network based on improved YOLO
    Sun, Yanpeng
    Wang, Chenlu
    Qu, Lele
    BASIC & CLINICAL PHARMACOLOGY & TOXICOLOGY, 2019, 125 : 224 - 224
  • [32] In-out YOLO glass: Indoor-outdoor object detection using adaptive spatial pooling squeeze and attention YOLO network
    Gladis, K. P. Ajitha
    Madavarapu, Jhansi Bharathi
    Kumar, R. Raja
    Sugashini, T.
    BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2024, 91
  • [33] ORO-YOLO: An Improved YOLO Algorithm for On-Road Object Detection
    Lian, Zheng
    Nie, Yiming
    Kong, Fanjie
    Dai, Bin
    PROCEEDINGS OF 2022 INTERNATIONAL CONFERENCE ON AUTONOMOUS UNMANNED SYSTEMS, ICAUS 2022, 2023, 1010 : 3653 - 3664
  • [34] YOLO-G: Improved YOLO for cross-domain object detection
    Wei, Jian
    Wang, Qinzhao
    Zhao, Zixu
    PLOS ONE, 2023, 18 (09):
  • [35] YOLO-CIR: The network based on YOLO and ConvNeXt for infrared object detection
    Zhou, Jinjie
    Zhang, Baohui
    Yuan, Xilin
    Lian, Cheng
    Ji, Li
    Zhang, Qian
    Yue, Jiang
    INFRARED PHYSICS & TECHNOLOGY, 2023, 131
  • [36] AMFT-YOLO: An Adaptive Multi-scale YOLO Algorithm with Multi-level Feature Fusion for Object Detection in UAV Scenes
    Wang, Tiebiao
    Cui, Zhenchao
    Li, Xiaoyang
    MULTIMEDIA MODELING, MMM 2025, PT I, 2025, 15520 : 72 - 85
  • [37] YOLO-ELWNet: A lightweight object detection network
    Song, Baoye
    Chen, Jianyu
    Liu, Weibo
    Fang, Jingzhong
    Xue, Yani
    Liu, Xiaohui
    NEUROCOMPUTING, 2025, 636
  • [38] An Object Detection System Based on YOLO in Traffic Scene
    Tao, Jing
    Wang, Hongbo
    Zhang, Xinyu
    Li, Xiaoyu
    Yang, Huawei
    PROCEEDINGS OF 2017 6TH INTERNATIONAL CONFERENCE ON COMPUTER SCIENCE AND NETWORK TECHNOLOGY (ICCSNT 2017), 2017, : 315 - 319
  • [39] Understanding of Object Detection Based on CNN Family and YOLO
    Du, Juan
    2ND INTERNATIONAL CONFERENCE ON MACHINE VISION AND INFORMATION TECHNOLOGY (CMVIT 2018), 2018, 1004
  • [40] BUILDING ENVELOPE OBJECT DETECTION USING YOLO MODELS
    Bayomi, Norhan
    El Kholy, Mohanned
    Fernandez, John E.
    Velipasalar, Senem
    Rakha, Tarek
    PROCEEDINGS OF THE 2022 ANNUAL MODELING AND SIMULATION CONFERENCE (ANNSIM'22), 2022, : 617 - 630