NTSDCN: New Three-Stage Deep Convolutional Image Demosaicking Network

Cited by: 9
Authors
Wang, Yan [1 ,2 ]
Yin, Shiying [1 ]
Zhu, Shuyuan [1 ]
Ma, Zhan [1 ,3 ]
Xiong, Ruiqin [4 ]
Zeng, Bing [1 ]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Informat & Commun Engn, Chengdu 611731, Peoples R China
[2] Hikvis Res Inst, Hangzhou 310051, Peoples R China
[3] Nanjing Univ, Sch Elect Sci & Engn, Nanjing 210023, Peoples R China
[4] Peking Univ, Inst Digital Media, Sch Elect Engn & Comp Sci, Beijing 100871, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Image reconstruction; Laplace equations; Feature extraction; Image color analysis; Convolution; Interpolation; Energy loss; Demosaicking; convolutional neural network; prior information; residual; features; COLOR INTERPOLATION;
DOI
10.1109/TCSVT.2020.3040082
CLC Classification
TM [Electrical Technology]; TN [Electronics and Communication Technology]
Discipline Codes
0808; 0809
Abstract
In this letter, we propose a new three-stage deep convolutional neural network (NTSDCN) for image demosaicking, which consists of two proposed components: a Laplacian energy-constrained local residual unit (LC-LRU) and a feature-guided prior fusion unit (FG-PFU). Specifically, the LC-LRU refines the learning target of specific residual blocks in the network and enhances the dominant information of the residual features. The FG-PFU guides the feature extraction of the red (R) and blue (B) channels by utilizing prior information from the reconstructed green (G) channel. In the proposed NTSDCN, the first stage recovers the G channel from the CFA image, the second stage reconstructs the R and B channels, and the third stage fine-tunes the resulting R, G, and B channels to compose a full-color RGB image. Experimental results show that the proposed method outperforms state-of-the-art methods. The code is available at https://github.com/wyannn/NTSDCN.
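The staged data flow described in the abstract can be illustrated with a minimal NumPy sketch. The learned stages (LC-LRU residual blocks, FG-PFU guidance, final refinement CNN) are replaced here by simple placeholders: box-filter interpolation stands in for the learned reconstruction, a G-guided color-difference step stands in for the FG-PFU, and an RGGB Bayer layout is assumed; this is an illustration of the pipeline structure, not the authors' implementation.

```python
import numpy as np

def bayer_mosaic(rgb):
    """Sample an RGGB Bayer CFA from a full-color image of shape (H, W, 3)."""
    h, w, _ = rgb.shape
    cfa = np.zeros((h, w))
    cfa[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R samples
    cfa[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G samples
    cfa[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G samples
    cfa[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B samples
    return cfa

def interpolate(channel, mask):
    """Fill missing samples by normalized 3x3 averaging of known samples
    (a crude stand-in for the learned reconstruction in each stage)."""
    num = np.zeros_like(channel)
    den = np.zeros_like(channel)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            num += np.roll(np.roll(channel * mask, dy, axis=0), dx, axis=1)
            den += np.roll(np.roll(mask.astype(float), dy, axis=0), dx, axis=1)
    filled = num / np.maximum(den, 1e-8)
    return np.where(mask, channel, filled)

def demosaick_three_stage(cfa):
    """Three-stage demosaicking skeleton mirroring the NTSDCN data flow."""
    h, w = cfa.shape
    r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)
    # Stage 1: recover the G channel from the CFA samples.
    g = interpolate(cfa, g_mask)
    # Stage 2: reconstruct R and B; interpolating the color difference
    # against the recovered G channel mimics the G-guided prior (FG-PFU).
    r = g + interpolate(cfa - g, r_mask)
    b = g + interpolate(cfa - g, b_mask)
    # Stage 3: joint refinement of R, G, B (identity placeholder here,
    # where NTSDCN applies its fine-tuning network).
    return np.stack([r, g, b], axis=-1)
```

On a flat test image the placeholder pipeline reconstructs the input exactly; on real images, the learned stages are precisely what replaces these interpolation placeholders.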
Pages: 3725-3729
Page count: 5
Related Papers
50 total
  • [1] Shao X., Ye H., Yang B., Cao F. Image Inpainting with a Three-Stage Generative Network. Moshi Shibie yu Rengong Zhineng/Pattern Recognition and Artificial Intelligence, 2022, 35(12): 1047-1063.
  • [2] Cui K., Jin Z., Steinbach E. Color Image Demosaicking Using a 3-Stage Convolutional Neural Network Structure. 2018 25th IEEE International Conference on Image Processing (ICIP), 2018: 2177-2181.
  • [3] Wang Y., Shin J. ISFRNet: A Deep Three-stage Identity and Structure Feature Refinement Network for Facial Image Inpainting. KSII Transactions on Internet and Information Systems, 2023, 17(3): 881-895.
  • [4] Liu L., Li X., Yang J., Tian X., Liu L. Three-Stage Interpolation Method for Demosaicking Monochrome Polarization DoFP Images. Sensors, 2024, 24(10).
  • [5] Park B., Jeong J. Image Demosaicking Using Densely Connected Convolutional Neural Network. 2018 14th International Conference on Signal Image Technology & Internet Based Systems (SITIS), 2018: 304-307.
  • [6] Li H., Liu X., Boumaraf S., Liu W., Gong X., Ma X. A New Three-stage Curriculum Learning Approach for Deep Network Based Liver Tumor Segmentation. 2020 International Joint Conference on Neural Networks (IJCNN), 2020.
  • [7] Kokkinos F., Lefkimmiatis S. Deep Image Demosaicking Using a Cascade of Convolutional Residual Denoising Networks. Computer Vision - ECCV 2018, Pt XIV, 2018, 11218: 317-333.
  • [8] Yu T., Wang J., Wu L., Xu Y. Three-stage Network for Age Estimation. CAAI Transactions on Intelligence Technology, 2019, 4(2): 122-126.
  • [9] Chi Z., Shu X., Wu X. Joint Demosaicking and Blind Deblurring Using Deep Convolutional Neural Network. 2019 IEEE International Conference on Image Processing (ICIP), 2019: 2169-2173.
  • [10] Tan D. S., Chen W.-Y., Hua K.-L. DeepDemosaicking: Adaptive Image Demosaicking via Multiple Deep Fully Convolutional Networks. IEEE Transactions on Image Processing, 2018, 27(5): 2408-2419.