SID-Net: single image dehazing network using adversarial and contrastive learning

Cited by: 12
Authors
Yi, Weichao [1 ]
Dong, Liquan [1 ,2 ]
Liu, Ming [1 ,2 ]
Hui, Mei [1 ]
Kong, Lingqin [1 ,2 ]
Zhao, Yuejin [1 ,2 ]
Institutions
[1] Beijing Inst Technol, Sch Opt & Photon, Beijing Key Lab Precis Optoelect Measurement Inst, Beijing 100081, Peoples R China
[2] Yangtze Delta Reg Acad, Beijing Inst Technol, Jiaxing 314019, Peoples R China
Keywords
Image dehazing; Convolutional neural networks; Adversarial learning; Contrastive learning;
DOI
10.1007/s11042-024-18502-7
Chinese Library Classification
TP [Automation Technology; Computer Technology]
Discipline Code
0812
Abstract
Image dehazing is a fundamental low-level vision task that has gained increasing attention in the computer vision community. Most existing learning-based methods achieve haze removal by designing different convolutional neural networks. However, these algorithms consider only clean images as optimization targets and fail to exploit the negative information contained in hazy images, which leads to sub-optimal dehazing performance. To address this issue, we propose a novel single image dehazing network (SID-Net) consisting of three core branches: the Image Dehazing Branch (IDB), the Adversarial Guidance Branch (AGB), and the Contrastive Enhancement Branch (CEB). Specifically, IDB performs an initial hazy-to-clean translation based on an encoder-decoder framework and strengthens its feature representation ability with an Attentive Recurrent Module (ARM) and an Attention Fusion Operation (AFO). Next, AGB takes full advantage of the positive information in the clean ground truth through an adversarial learning strategy, guiding the restored image closer to the haze-free domain. Finally, CEB exploits the negative information of hazy images to further improve dehazing performance via a contrastive learning strategy. Extensive experiments on both synthetic and real-world datasets demonstrate that our SID-Net achieves results comparable to other state-of-the-art algorithms. Code is available at https://github.com/leandepk/SID-Net-for-image-dehazing.
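To make the CEB idea concrete, below is a minimal, hypothetical PyTorch sketch of a contrastive regularization term of the kind the abstract describes: the clean ground truth serves as the positive, the hazy input as the negative, and the restored image is pulled toward the former and pushed away from the latter in the feature space of a frozen VGG-16. The layer indices, weights, and ratio-style loss form are assumptions loosely following common contrastive-regularization setups for dehazing, not the paper's exact formulation.

import torch.nn as nn
from torchvision.models import vgg16, VGG16_Weights

class ContrastiveRegularization(nn.Module):
    """Hypothetical contrastive loss: L1(anchor, positive) / L1(anchor, negative),
    computed on frozen VGG-16 features. Not the official SID-Net implementation."""
    def __init__(self, layer_ids=(3, 8, 15), layer_weights=(1/32, 1/16, 1/8)):
        super().__init__()
        self.vgg = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).features.eval()
        for p in self.vgg.parameters():
            p.requires_grad = False  # the feature extractor stays fixed
        self.layer_ids = set(layer_ids)
        self.layer_weights = layer_weights
        self.l1 = nn.L1Loss()

    def extract(self, x):
        # Collect intermediate activations at the chosen VGG layers.
        feats = []
        for i, layer in enumerate(self.vgg):
            x = layer(x)
            if i in self.layer_ids:
                feats.append(x)
            if len(feats) == len(self.layer_ids):
                break
        return feats

    def forward(self, restored, clean, hazy):
        # anchor: restored output; positive: clean GT; negative: hazy input
        f_r, f_c, f_h = self.extract(restored), self.extract(clean), self.extract(hazy)
        loss = 0.0
        for w, r, c, h in zip(self.layer_weights, f_r, f_c, f_h):
            # Small in the numerator (close to clean), large in the
            # denominator (far from hazy) both drive the loss down.
            loss = loss + w * self.l1(r, c) / (self.l1(r, h) + 1e-7)
        return loss

In training, such a term would typically be added to a pixel reconstruction loss and the adversarial loss from AGB, e.g. loss = l_rec + lambda_adv * l_adv + lambda_cr * cr(restored, clean, hazy), with the trade-off weights tuned per dataset.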
Pages: 71619 - 71638
Page count: 20
Related Papers
50 in total
  • [31] LFR-Net: Local feature residual network for single image dehazing
    Xiao, Xinjie
    Li, Zhiwei
    Ning, Wenle
    Zhang, Nannan
    Teng, Xudong
    ARRAY, 2023, 17
  • [32] GANID: a novel generative adversarial network for image dehazing
    Manu, Chippy M.
    Sreeni, K. G.
    VISUAL COMPUTER, 2023, 39 (09): 3923 - 3936
  • [33] AED-Net: A Single Image Dehazing
    Hovhannisyan, Sargis A.
    Gasparyan, Hayk A.
    Agaian, Sos S.
    Ghazaryan, Art
    IEEE ACCESS, 2022, 10 : 12465 - 12474
  • [36] FFA-Net: Feature Fusion Attention Network for Single Image Dehazing
    Qin, Xu
    Wang, Zhilin
    Bai, Yuanchao
    Xie, Xiaodong
    Jia, Huizhu
    THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE (AAAI 2020), 2020, 34 : 11908 - 11915
  • [37] Aethra-net: Single image and video dehazing using autoencoder
    Juneja, Akshay
    Kumar, Vijay
    Singla, Sunil Kumar
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2023, 94
  • [38] Single Image Dehazing Based on Generative Adversarial Networks
    Wu, Mengyun
    Li, Bo
    INTELLIGENT COMPUTING THEORIES AND APPLICATION, ICIC 2022, PT II, 2022, 13394 : 460 - 469
  • [39] Fusion of Heterogeneous Adversarial Networks for Single Image Dehazing
    Park, Jaihyun
    Han, David K.
    Ko, Hanseok
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2020, 29 : 4721 - 4732
  • [40] DHGAN: Generative adversarial network with dark channel prior for single-image dehazing
    Wu, Wenxia
    Zhu, Jinxiu
    Su, Xin
    Zhang, Xuewu
    CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE, 2020, 32 (18)