Joint Image and Feature Enhancement for Object Detection under Adverse Weather Conditions

Times Cited: 0
Authors
Yin, Mengyu [1 ]
Ling, Mingyang [2 ]
Chang, Kan [1 ,3 ]
Yuan, Zijian [1 ]
Qin, Qingpao [1 ]
Chen, Boning [4 ]
Affiliations
[1] Guangxi Univ, Sch Comp & Elect Informat, Nanning, Peoples R China
[2] Guangxi Univ, Sch Elect Engn, Nanning, Peoples R China
[3] Guangxi Univ, Guangxi Key Lab Multimedia Commun & Network Techn, Nanning, Peoples R China
[4] Univ Melbourne, Sch Comp & Informat Syst, Melbourne, Vic, Australia
Source
2024 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN 2024 | 2024
Funding
National Natural Science Foundation of China;
关键词
Object Detection; Image Enhancement; Feature Enhancement; Adverse Weather Conditions; FUSION NETWORK;
DOI
10.1109/IJCNN60899.2024.10650989
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Object detection under adverse weather conditions remains a challenging problem. To address it, a joint image and feature enhancement method called JE-YOLO is proposed. First, a lightweight image enhancement network enhances the low-quality image captured under adverse weather. Second, to provide rich information for detection, two detection backbones are applied in parallel to extract features from both the low-quality image and its enhanced version. The extracted features are then further refined by a foreground-guided feature refinement module (FFRM), which introduces a task-driven attention mechanism and exploits inter-layer correlation. Finally, the enhanced features from the different branches are fused by an adaptive multi-branch weighting (AMW) strategy and fed to the neck and head of the detector. Experiments under both low-light and foggy conditions demonstrate that, compared with state-of-the-art (SOTA) methods, the proposed JE-YOLO achieves the highest detection accuracy in all cases. Code will be available at https://github.com/Murray-Yin/JE-YOLO.
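The fusion step described above can be illustrated with a minimal sketch. The function name `amw_fuse` and the use of softmax-normalized per-branch logits are assumptions for illustration; the paper's actual AMW strategy presumably learns content-dependent weights, which this toy example does not attempt to reproduce. It only shows the basic idea of weighting and summing features from the raw-image and enhanced-image branches:

```python
import numpy as np

def softmax(x, axis=0):
    # numerically stable softmax over the branch axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def amw_fuse(feats, logits):
    """Toy adaptive multi-branch weighting (hypothetical sketch):
    normalize one logit per branch with softmax, then take the
    weighted sum of the branch feature maps."""
    w = softmax(np.asarray(logits, dtype=np.float64))
    return sum(wi * f for wi, f in zip(w, feats))

# Two branches: features extracted from the low-quality image
# and from its enhanced counterpart (shapes: N x C x H x W).
f_raw = np.ones((1, 8, 4, 4))
f_enh = 3.0 * np.ones((1, 8, 4, 4))

# Equal logits give equal weights, i.e. the element-wise mean.
fused = amw_fuse([f_raw, f_enh], logits=[0.0, 0.0])
```

With equal logits the fused map is simply the average of the two branches; in the actual detector the weights would be predicted from the features themselves before the result is passed to the neck and head.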
Pages: 8