Single-Image Dehazing via Compositional Adversarial Network

Times cited: 31
Authors
Zhu, Hongyuan [1,2]
Cheng, Yi [1]
Peng, Xi [3]
Zhou, Joey Tianyi [4]
Kang, Zhao [5]
Lu, Shijian [6]
Fang, Zhiwen [7,8]
Li, Liyuan [1]
Lim, Joo-Hwee [1]
Affiliations
[1] A*STAR, Inst Infocomm Res, Singapore, Singapore
[2] A*STAR, A*AI, CHEEM Program, Singapore, Singapore
[3] Sichuan Univ, Coll Comp Sci, Chengdu 610065, Peoples R China
[4] A*STAR, Inst High Performance Comp, Singapore, Singapore
[5] Univ Elect Sci & Technol China, Sch Comp Sci & Engn, Chengdu 611731, Peoples R China
[6] Nanyang Technol Univ, Sch Comp Sci & Engn, Singapore, Singapore
[7] Southern Med Univ, Sch Biomed Engn, Guangdong Prov Key Lab Med Image Proc, Guangzhou 510515, Peoples R China
[8] Hunan Univ Humanities Sci & Technol, Sch Energy & Mech Elect Engn, Loudi 417000, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Image color analysis; estimation; image enhancement; image processing
DOI
10.1109/TCYB.2019.2955092
CLC number (Chinese Library Classification)
TP [automation technology; computer technology]
Discipline code
0812
Abstract
Single-image dehazing is an important topic, given the image degradation commonly caused by adverse atmospheric aerosols. The key to haze removal is an accurate estimation of the global air-light and the transmission map. Most existing methods estimate these two parameters in separate pipelines, which reduces efficiency and accumulates errors, leading to suboptimal approximations, weaker model interpretability, and degraded performance. To address these issues, this article introduces a novel generative adversarial network (GAN) for single-image dehazing. The network consists of a novel compositional generator and a novel deeply supervised discriminator. The compositional generator is a densely connected network that combines fine-scale and coarse-scale information. Benefiting from this generator, our method directly learns the physical parameters from data and recovers clean images from hazy ones in an end-to-end manner. The deeply supervised discriminator enforces the generator's output to resemble clean images from low-level details up to high-level structures. To the best of our knowledge, this is the first end-to-end generative adversarial model for image dehazing that simultaneously outputs clean images, transmission maps, and air-lights. Extensive experiments show that our method substantially outperforms state-of-the-art methods. Furthermore, to facilitate future research, we create the HazeCOCO dataset, which is currently the largest dataset for single-image dehazing.
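The two physical parameters named in the abstract enter through the standard atmospheric scattering model I(x) = J(x)t(x) + A(1 - t(x)), where I is the observed hazy image, J the clean scene radiance, t the transmission map, and A the global air-light; once t and A are estimated, the clean image follows by inverting this model. Below is a minimal NumPy sketch of that inversion. The record contains no code, so the function name, the t_min floor, and the array layout are illustrative assumptions, not the paper's implementation.

import numpy as np

def recover_clean_image(hazy, transmission, airlight, t_min=0.1):
    """Invert the atmospheric scattering model I = J * t + A * (1 - t).

    hazy         : H x W x 3 float array in [0, 1], the observed hazy image I
    transmission : H x W float array in (0, 1], the estimated transmission map t
    airlight     : length-3 float array, the estimated global air-light A
    t_min        : floor on t to avoid amplifying noise where haze is dense
    """
    t = np.clip(transmission, t_min, 1.0)[..., None]  # add axis to broadcast over color channels
    clean = (hazy - airlight) / t + airlight          # J = (I - A) / t + A
    return np.clip(clean, 0.0, 1.0)

In the paper's end-to-end setting, t and A are not obtained from hand-crafted priors but predicted jointly with the clean image by the compositional generator; the analytic inversion above is the step such a pipeline learns to approximate.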
Pages: 829-838
Number of pages: 10