Physical-model guided self-distillation network for single image dehazing

Cited by: 4
|
Authors
Lan, Yunwei [1 ]
Cui, Zhigao [1 ]
Su, Yanzhao [1 ]
Wang, Nian [1 ]
Li, Aihua [1 ]
Han, Deshuai [1 ]
Affiliations
[1] Xian Res Inst High Technol, Xian, Peoples R China
Source
FRONTIERS IN NEUROROBOTICS | 2022 / Vol. 16
Funding
National Natural Science Foundation of China;
Keywords
image dehazing; knowledge distillation; attention mechanism; deep learning; computer vision; QUALITY ASSESSMENT;
DOI
10.3389/fnbot.2022.1036465
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Motivation: Image dehazing, as a key prerequisite for high-level computer vision tasks, has gained extensive attention in recent years. Traditional model-based methods recover dehazed images via the atmospheric scattering model; they dehaze effectively but often introduce artifacts due to errors in parameter estimation. By contrast, recent model-free methods restore dehazed images directly with an end-to-end network and achieve better color fidelity. To improve the dehazing effect, we combine the complementary merits of these two categories and propose a physical-model guided self-distillation network for single image dehazing, named PMGSDN.
Proposed method: First, we propose a novel attention guided feature extraction block (AGFEB) and use it to build a deep feature extraction network. Second, we add three early-exit branches and embed dark channel prior information into the network to merge the merits of model-based and model-free methods. We then adopt self-distillation to transfer features from the deeper layers (acting as the teacher) to the shallow early-exit branches (acting as students) to further improve the dehazing effect.
Results: On the I-HAZE and O-HAZE datasets, the proposed method outperforms the compared methods, achieving the best PSNR/SSIM values of 17.41 dB/0.813 and 18.48 dB/0.802, respectively. Moreover, for real-world images, the proposed method also produces high-quality dehazed results.
Conclusion: Experimental results on both synthetic and real-world images demonstrate that the proposed PMGSDN can effectively dehaze images, yielding results with clear textures and good color fidelity.
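
As a reading aid, the sketch below illustrates the two standard ingredients the abstract refers to: the dark channel prior and the atmospheric scattering model I(x) = J(x)t(x) + A(1 - t(x)) that supply the physical guidance, and an L1-style self-distillation term that pulls the shallow early-exit branches (students) toward the deepest branch (teacher). It is a minimal NumPy/SciPy sketch under assumed choices (the function names, the 15x15 patch, omega = 0.95, and the plain L1 distillation form are illustrative), not the authors' PMGSDN implementation.

# Minimal sketch; all names and hyperparameters below are illustrative
# assumptions, not the PMGSDN code.
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(image, patch_size=15):
    # Dark channel prior: per-pixel minimum over RGB, then a local minimum
    # filter over a patch (He et al.).
    min_rgb = image.min(axis=2)                      # H x W
    return minimum_filter(min_rgb, size=patch_size)  # H x W

def atmospheric_light(image, dark, top_fraction=0.001):
    # Estimate the atmospheric light A from the brightest dark-channel pixels.
    h, w = dark.shape
    n_top = max(1, int(h * w * top_fraction))
    brightest = np.argsort(dark.ravel())[-n_top:]
    return image.reshape(-1, 3)[brightest].mean(axis=0)  # RGB vector

def self_distillation_loss(branch_outputs, teacher_output):
    # Stand-in distillation term: shallow early-exit outputs (students) are
    # pulled toward the deepest branch output (teacher) with an L1 loss.
    return sum(np.abs(out - teacher_output).mean() for out in branch_outputs)

# Toy usage on a random "hazy" image with values in [0, 1], shape H x W x 3.
hazy = np.random.rand(64, 64, 3).astype(np.float32)
A = atmospheric_light(hazy, dark_channel(hazy))
omega = 0.95
# Transmission from the scattering model I = J*t + A*(1 - t):
transmission = 1.0 - omega * dark_channel(hazy / A)
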
Pages: 12
Related Papers
50 records in total
  • [1] Mutual learning for domain adaptation: Self-distillation image dehazing network with sample-cycle
    Chen, Erkang
    Tong, Lihan
    Ye, Tian
    Chen, Sixiang
    Zhang, Yunchen
    Liu, Yun
    DISPLAYS, 2025, 87
  • [2] A Self-distillation Lightweight Image Classification Network Scheme
    Ni S.
    Ma X.
    Beijing Youdian Daxue Xuebao/Journal of Beijing University of Posts and Telecommunications, 2023, 46 (06): 66-71
  • [3] Online knowledge distillation network for single image dehazing
    Lan, Yunwei
    Cui, Zhigao
    Su, Yanzhao
    Wang, Nian
    Li, Aihua
    Zhang, Wei
    Li, Qinghui
    Zhong, Xiao
    SCIENTIFIC REPORTS, 2022, 12 (01)
  • [4] Physical model and image translation fused network for single-image dehazing
    Su, Yan Zhao
    He, Chuan
    Cui, Zhi Gao
    Li, Ai Hua
    Wang, Nian
    PATTERN RECOGNITION, 2023, 142
  • [5] PSD-ELGAN: A pseudo self-distillation based CycleGAN with enhanced local adversarial interaction for single image dehazing
    Wu, Kangle
    Huang, Jun
    Ma, Yong
    Fan, Fan
    Ma, Jiayi
    NEURAL NETWORKS, 2024, 180
  • [6] SSKDN: a semisupervised knowledge distillation network for single image dehazing
    Lan, Yunwei
    Cui, Zhigao
    Su, Yanzhao
    Wang, Nian
    Li, Aihua
    Li, Qinghui
    Zhong, Xiao
    Zhang, Cong
    JOURNAL OF ELECTRONIC IMAGING, 2023, 32 (01)
  • [7] Tolerant Self-Distillation for image classification
    Liu, Mushui
    Yu, Yunlong
    Ji, Zhong
    Han, Jungong
    Zhang, Zhongfei
    NEURAL NETWORKS, 2024, 174
  • [8] Self-distillation with model averaging
    Gu, Xiaozhe
    Zhang, Zixun
    Jin, Ran
    Goh, Rick Siow Mong
    Luo, Tao
    INFORMATION SCIENCES, 2025, 694
  • [9] Image classification based on self-distillation
    Yuting Li
    Linbo Qing
    Xiaohai He
    Honggang Chen
    Qiang Liu
    Applied Intelligence, 2023, 53: 9396-9408