Multi-task learning with self-learning weight for image denoising

Cited by: 0
Authors
Xiang, Qian [1 ]
Tang, Yong [2 ]
Zhou, Xiangyang [1 ]
Affiliations
[1] College of Information Science and Engineering, Wuchang Shouyi University, Wuhan
[2] School of Artificial Intelligence, Hubei Business College, Wuhan
Source
Journal of Engineering and Applied Science | 2024 / Vol. 71 / No. 01
Keywords
Convolutional neural network; Image denoising; Multi-objective optimization; Multi-task learning; Non-Gaussian noise model; Self-learning weight;
DOI
10.1186/s44147-024-00425-7
Abstract
Background: Image denoising removes noise from a corrupted image by exploiting the differing characteristics of image content and noise. Convolutional neural network (CNN)-based algorithms have driven recent progress on diverse image restoration problems and have become an efficient solution for image denoising. Objective: Although many existing CNN-based image denoising methods perform well under the simplified additive white Gaussian noise (AWGN) model, their performance often degrades severely on real-world noisy images, which are corrupted by more complicated noise. Methods: In this paper, we use a multi-task learning (MTL) framework to integrate multiple loss functions for the collaborative training of a CNN, aiming to improve its denoising performance on real-world images with non-Gaussian noise. To automatically optimize the weights of the individual sub-tasks within the MTL framework, we incorporate a self-learning weight layer into the CNN. Results: Extensive experiments demonstrate that our approach effectively enhances the denoising performance of CNN-based image denoising algorithms on real-world images: it reduces excessive image smoothing, improves quantitative metrics, and enhances the visual quality of the restored images. Conclusion: Our method effectively improves the performance of denoising CNNs for real-world image denoising. © The Author(s) 2024.
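The abstract does not spell out the exact form of the self-learning weight layer. A common way to make multi-task loss weights learnable is homoscedastic-uncertainty weighting, where each sub-task loss L_i is scaled by exp(-s_i) and a regularizer s_i discourages the weight from collapsing to zero; the s_i are updated by gradient descent alongside the network. The sketch below illustrates only that weighting scheme, not the paper's specific layer; the function names `combined_loss` and `update_log_vars` are hypothetical.

```python
import math

def combined_loss(task_losses, log_vars):
    """Uncertainty-style weighted sum of sub-task losses:
    L = sum_i exp(-s_i) * L_i + s_i, where s_i is a learnable log-variance."""
    return sum(math.exp(-s) * L + s for L, s in zip(task_losses, log_vars))

def update_log_vars(task_losses, log_vars, lr=0.1):
    """One gradient-descent step on the weights.
    dL/ds_i = -exp(-s_i) * L_i + 1, so s_i drifts toward log(L_i):
    sub-tasks with larger (noisier) losses are automatically down-weighted."""
    return [s - lr * (-math.exp(-s) * L + 1.0)
            for L, s in zip(task_losses, log_vars)]

# With all s_i = 0 the combined loss is just the plain sum of sub-task losses.
total = combined_loss([1.0, 2.0], [0.0, 0.0])  # 1.0 + 2.0 = 3.0
new_s = update_log_vars([2.0], [0.0], lr=0.1)  # s moves toward log(2)
```

In a full training loop these updates would be handled by the same optimizer that trains the CNN, with the log-variances registered as trainable parameters.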