Image Denoising Using Dual Convolutional Neural Network with Skip Connection

Cited by: 1
Authors
Mengnan L [1]
Xianchun Zhou [2]
Zhiting Du [1]
Yuze Chen [1]
Binxin Tang [1]
Affiliations
[1] School of Electronics Information Engineering, Nanjing University of Information Science & Technology
[2] School of Artificial Intelligence, Nanjing University of Information Science and Technology
Keywords
DOI: Not available
CLC Number: TP183 [Artificial Neural Networks and Computation]; TP391.41 []
Subject Classification Code: 080203
Abstract
In recent years, deep convolutional neural networks have shown superior performance in image denoising. However, deep network structures often come with a large number of model parameters, leading to high training costs and long inference times, which limits their practical application in denoising tasks. This paper proposes a new dual convolutional denoising network with skip connections (DECDNet), which achieves an ideal balance between denoising performance and network complexity. The proposed DECDNet consists of a noise estimation network, a multi-scale feature extraction network, a dual convolutional neural network, and dual attention mechanisms. The noise estimation network estimates a noise level map, and the multi-scale feature extraction network improves the model's flexibility in capturing image features. The dual convolutional neural network adopts a two-branch design with interactive connections between standard and dilated convolutions; the lower branch consists of dilated convolution layers, and both branches use skip connections. Experiments show that, compared with other models, the proposed DECDNet achieves higher PSNR and SSIM values at all tested noise levels and is especially robust at higher noise levels. It also produces better visual results, maintaining a balance between noise removal and detail preservation.
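To make the two-branch idea described in the abstract concrete, the following is a minimal PyTorch sketch of a dual-branch denoising CNN with a noise estimation subnetwork, dilated convolutions in the lower branch, and skip connections in both branches. The layer counts, channel widths, module names (NoiseEstimator, DualBranchDenoiser), and the way the noise level map is fused with the input are illustrative assumptions; this is not the authors' DECDNet, which additionally includes multi-scale feature extraction and dual attention mechanisms not shown here.

# Minimal sketch of a dual-branch denoising CNN, assuming PyTorch.
# Architecture details below are illustrative assumptions, not the paper's DECDNet.
import torch
import torch.nn as nn


class NoiseEstimator(nn.Module):
    """Estimates a per-pixel noise level map from the noisy input (assumed design)."""
    def __init__(self, channels=1, features=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(features, features, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(features, channels, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)


class DualBranchDenoiser(nn.Module):
    """Upper branch: standard convolutions; lower branch: dilated convolutions.
    Both branches use skip connections; their outputs are fused to predict the
    residual noise, which is subtracted from the input."""
    def __init__(self, channels=1, features=64):
        super().__init__()
        self.noise_est = NoiseEstimator(channels)
        # The noisy image is concatenated with the estimated noise level map.
        self.head = nn.Conv2d(channels * 2, features, 3, padding=1)
        self.upper = nn.ModuleList(
            [nn.Conv2d(features, features, 3, padding=1) for _ in range(4)]
        )
        self.lower = nn.ModuleList(
            [nn.Conv2d(features, features, 3, padding=2, dilation=2) for _ in range(4)]
        )
        self.fuse = nn.Conv2d(features * 2, channels, 3, padding=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        sigma = self.noise_est(x)                  # estimated noise level map
        feat = self.act(self.head(torch.cat([x, sigma], dim=1)))
        u, l = feat, feat
        for conv_u, conv_l in zip(self.upper, self.lower):
            u = self.act(conv_u(u)) + u            # skip connection, upper branch
            l = self.act(conv_l(l)) + l            # skip connection, lower branch (dilated)
        residual = self.fuse(torch.cat([u, l], dim=1))
        return x - residual                        # residual learning: subtract predicted noise


if __name__ == "__main__":
    model = DualBranchDenoiser()
    noisy = torch.randn(1, 1, 64, 64)
    print(model(noisy).shape)  # torch.Size([1, 1, 64, 64])

Such a sketch would typically be trained by minimizing an L1 or L2 loss between the network output and the clean image; predicting and subtracting the noise (residual learning) is a common design choice in denoising CNNs.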
Pages: 74-85
Page count: 12
Related Papers (50 records)
  • [41] Speech Enhancement using Convolutional Neural Network with Skip Connections
    Shi, Yupeng
    Rong, Weicong
    Zheng, Nengheng
    2018 11TH INTERNATIONAL SYMPOSIUM ON CHINESE SPOKEN LANGUAGE PROCESSING (ISCSLP), 2018, : 6 - 10
  • [42] Denoising Convolutional Neural Network
    Xu, Qingyang
    Zhang, Chengjin
    Zhang, Li
    2015 IEEE INTERNATIONAL CONFERENCE ON INFORMATION AND AUTOMATION, 2015, : 1184 - 1187
  • [43] ADAPTIVE IMAGE DENOISING USING DEEP CONVOLUTIONAL NEURAL NETWORK FOR CARDIOVASCULAR DISEASE DIAGNOSIS
    Chen, Xiao
    Gao, Yang
    Xu, Chang
    JOURNAL OF INVESTIGATIVE MEDICINE, 2023, 71 : 31 - 31
  • [44] Optimal image Denoising using patch-based convolutional neural network architecture
    Tabassum, Shabana
    Gowre, SanjayKumar C.
    MULTIMEDIA TOOLS AND APPLICATIONS, 2023, 82 (19) : 29805 - 29821
  • [45] Optimal image Denoising using patch-based convolutional neural network architecture
    Shabana Tabassum
    SanjayKumar C Gowre
    Multimedia Tools and Applications, 2023, 82 : 29805 - 29821
  • [46] Blind Image Quality Assessment via Deep Recursive Convolutional Network with Skip Connection
    Yan, Qingsen
    Sun, Jinqiu
    Su, Shaolin
    Zhu, Yu
    Li, Haisen
    Zhang, Yanning
    PATTERN RECOGNITION AND COMPUTER VISION, PT II, 2018, 11257 : 51 - 61
  • [47] PET Image Denoising Using Structural MRI with a Novel Dilated Convolutional Neural Network
    Serrano-Sosa, Mario
    Spuhler, Karl
    DeLorenzo, Christine
    Huang, Chuan
    JOURNAL OF NUCLEAR MEDICINE, 2020, 61
  • [48] SRNET: A Shallow Skip Connection Based Convolutional Neural Network Design for Resolving Singularities
    Robail Yasrab
    Journal of Computer Science and Technology, 2019, 34 : 924 - 938
  • [49] A Patch Based Denoising Method Using Deep Convolutional Neural Network for Seismic Image
    Zhang, Yushu
    Lin, Hongbo
    Li, Yue
    Ma, Haitao
    IEEE ACCESS, 2019, 7 : 156883 - 156894
  • [50] SenseNet: Densely Connected, Fully Convolutional Network with Bottleneck Skip Connection for Image Segmentation
    Lodhi, Bilal Ahmed
    Ullah, Rehmat
    Imran, Sajida
    Imran, Muhammad
    Kim, Byung-Seo
    IEIE Transactions on Smart Processing and Computing, 2024, 13 (04) : 328 - 336