Multi-task learning with self-learning weight for image denoising

Cited by: 0
Authors
Xiang, Qian [1 ]
Tang, Yong [2 ]
Zhou, Xiangyang [1 ]
Affiliations
[1] College of Information Science and Engineering, Wuchang Shouyi University, Wuhan
[2] School of Artificial Intelligence, Hubei Business College, Wuhan
Source
Journal of Engineering and Applied Science | 2024 / Vol. 71 / No. 1
Keywords
Convolutional neural network; Image denoising; Multi-objective optimization; Multi-task learning; Non-Gaussian noise model; Self-learning weight;
DOI
10.1186/s44147-024-00425-7
Abstract
Background: Image denoising removes noise from a corrupted image by exploiting the differing characteristics of image content and noise. Convolutional neural network (CNN)-based algorithms have been central to recent progress on diverse image restoration problems and have become an efficient solution for image denoising. Objective: Although quite a number of existing CNN-based image denoising methods perform well under the simplified additive white Gaussian noise (AWGN) model, their performance often degrades severely on real-world noisy images, which are corrupted by more complicated noise. Methods: In this paper, we utilized the multi-task learning (MTL) framework to integrate multiple loss functions for the collaborative training of a CNN. This approach aims to improve the denoising performance of CNNs on real-world images with non-Gaussian noise. To automatically optimize the weights of the individual sub-tasks within the MTL framework, we also incorporated a self-learning weight layer into the CNN. Results: Extensive experiments demonstrate that our approach effectively enhances the denoising performance of CNN-based image denoising algorithms on real-world images: it reduces excessive image smoothing, improves quantitative metrics, and enhances the visual quality of the restored images. Conclusion: Our method demonstrates the improved performance of denoising CNNs on real-world image denoising tasks. © The Author(s) 2024.
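The abstract does not give the exact formulation of the paper's self-learning weight layer. A common way to realize trainable per-task loss weights in MTL, sketched below as an assumption in plain NumPy with hand-derived gradients, is homoscedastic-uncertainty weighting: the combined loss is sum(exp(-s_i) * L_i + s_i), where each log-variance s_i is a trainable scalar. The two fixed loss values (4.0 and 0.25) are illustrative stand-ins for two sub-task losses, not values from the paper.

```python
import numpy as np

def combined_loss(task_losses, s):
    # Uncertainty-weighted total: sum_i exp(-s_i) * L_i + s_i
    return sum(np.exp(-si) * Li + si for Li, si in zip(task_losses, s))

def grad_s(task_losses, s):
    # d/ds_i [exp(-s_i) * L_i + s_i] = -exp(-s_i) * L_i + 1
    return np.array([-np.exp(-si) * Li + 1.0
                     for Li, si in zip(task_losses, s)])

s = np.zeros(2)            # one trainable log-variance per sub-task
lr = 0.1
for _ in range(200):
    losses = [4.0, 0.25]   # illustrative, fixed sub-task loss values
    s -= lr * grad_s(losses, s)

weights = np.exp(-s)       # effective per-task weights after adaptation
# At the fixed point exp(-s_i) = 1 / L_i, so the larger (noisier)
# loss automatically receives the smaller weight.
```

In a real denoising CNN the s_i would be parameters of the self-learning weight layer updated by backpropagation alongside the network weights; the closed-form gradient here just makes the adaptive behavior visible in isolation.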