Haze Relevant Feature Attention Network for Single Image Dehazing

Cited by: 8
Authors
Jiang, Xin [1 ,2 ]
Lu, Lu [1 ]
Zhu, Ming [1 ,2 ]
Hao, Zhicheng [1 ]
Gao, Wen [1 ]
Affiliations
[1] Chinese Acad Sci, Changchun Inst Opt Fine Mech & Phys, Changchun 130033, Peoples R China
[2] Univ Chinese Acad Sci, Beijing 100049, Peoples R China
Keywords
Atmospheric modeling; Image color analysis; Generators; Scattering; Generative adversarial networks; Feature extraction; Propagation losses; Single image dehazing; cycle generative adversarial networks; haze relevant feature; attention module; dense block; REMOVAL; VISIBILITY;
DOI
10.1109/ACCESS.2021.3100604
CLC Classification Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Single image dehazing methods based on deep learning have made great progress in recent years. However, some methods recover haze-free images by estimating the so-called transmission map and global atmospheric light; they are strictly limited by the simplified atmospheric scattering model and do not fully exploit the ability of deep learning to fit complex functions. Other methods require paired training data, whereas in practice pairs of hazy and corresponding haze-free images are difficult to obtain. To address these problems, inspired by the cycle generative adversarial model, we develop an end-to-end haze-relevant feature attention network for single image dehazing that does not require paired training images. Specifically, we make explicit use of haze-relevant features by embedding an attention module into a novel dehazing generator that combines an encoder-decoder structure with dense blocks. The network adopts a novel strategy that derives attention maps from several hand-designed priors, such as the dark channel, color attenuation, and maximum contrast. Since haze is usually unevenly distributed across an image, the attention maps serve as guidance on the amount of haze at each pixel. Meanwhile, dense blocks maximize information flow among features from different levels. Furthermore, a color loss is proposed to avoid color distortion and generate visually better haze-free images. Extensive experiments demonstrate that the proposed method achieves significant improvements over state-of-the-art methods.
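To make the abstract's key idea concrete, below is a minimal NumPy sketch of two of the hand-designed priors it names (dark channel and color attenuation) being turned into normalized attention maps. The patch size, normalization scheme, and function names are illustrative assumptions for exposition, not the paper's exact settings or implementation.

```python
import numpy as np

def dark_channel(img, patch=15):
    """Dark channel prior: per-pixel minimum over RGB channels,
    followed by a minimum filter over a local patch.
    img: float array in [0, 1], shape (H, W, 3)."""
    mins = img.min(axis=2)
    h, w = mins.shape
    pad = patch // 2
    padded = np.pad(mins, pad, mode="edge")
    out = np.empty_like(mins)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def color_attenuation(img):
    """Color attenuation prior: brightness minus saturation,
    used as a rough per-pixel haze-depth cue."""
    v = img.max(axis=2)                                   # HSV value (brightness)
    s = np.where(v > 0, (v - img.min(axis=2)) / np.maximum(v, 1e-6), 0.0)
    return v - s

def prior_attention_maps(img):
    """Stack min-max-normalized prior maps as channels that an
    attention module could take as guidance input."""
    maps = [dark_channel(img), color_attenuation(img)]
    norm = []
    for m in maps:
        rng = m.max() - m.min()
        norm.append((m - m.min()) / rng if rng > 0 else np.zeros_like(m))
    return np.stack(norm, axis=0)  # shape (2, H, W)
```

In a dehazing network such as the one described, maps like these would be fed alongside the hazy image so the attention module can weight regions by estimated haze density; here they are computed standalone for clarity.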
Pages: 106476-106488
Page count: 13