Boosting Noise Reduction Effect via Unsupervised Fine-Tuning Strategy

Cited by: 1
Authors
Jiang, Xinyi [1 ]
Xu, Shaoping [1 ]
Wu, Junyun [1 ]
Zhou, Changfei [1 ]
Ji, Shuichen [1 ]
Institutions
[1] Nanchang Univ, Sch Math & Comp Sci, Nanchang 330031, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2024, Vol. 14, Issue 05
Keywords
boosting denoising effect; supervised denoising models; data bias; unsupervised denoising models; flexibility; fine-tuning; IMAGE; SPARSE;
DOI
10.3390/app14051742
Chinese Library Classification (CLC)
O6 [Chemistry];
Discipline Classification Code
0703;
Abstract
Over the last decade, supervised denoising models trained on extensive datasets have exhibited remarkable image-denoising performance. However, these models offer limited flexibility and suffer varying degrees of degradation in noise reduction capability in practical scenarios, particularly when the noise distribution of a given noisy image deviates from that of the training images. To tackle this problem, we propose a two-stage denoising model that appends an unsupervised fine-tuning phase after a supervised denoising model has processed the input noisy image and produced a denoised image (regarded as a preprocessed image). More specifically, in the first stage we replace the convolution blocks of the U-shaped network framework (utilized in the deep image prior method) with Transformer modules; the resulting model, referred to as the U-Transformer, is trained on noisy images and their labels to preprocess the input noisy images. In the second stage, we condense the supervised U-Transformer into a simplified version containing only one Transformer module with fewer parameters, and switch its training mode to unsupervised training, following an approach similar to the deep image prior method. This stage further eliminates the minor residual noise and artifacts present in the preprocessed image, yielding clearer and more realistic output images. Experimental results show that the proposed method achieves significant noise reduction on both synthetic and real images, surpassing state-of-the-art methods. This superiority stems from the supervised model's ability to rapidly process given noisy images, while the unsupervised model leverages its flexibility to produce a fine-tuned network with enhanced noise reduction capability.
Moreover, because the supervised model supplies higher-quality preprocessed images, the proposed unsupervised fine-tuning model requires fewer parameters, trains and converges rapidly, and thus achieves high overall execution efficiency.
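The second stage described in the abstract follows the deep-image-prior idea: a small network is fitted, without any clean labels, to the (preprocessed) image itself, and stopping the optimization early removes residual noise while preserving structure. Below is a minimal NumPy sketch of that idea under stated assumptions: a tiny two-layer MLP stands in for the paper's single lightweight Transformer module, the signal is 1-D, and all names and hyperparameters (`code_dim`, `hidden`, `lr`, etc.) are illustrative, not taken from the paper.

```python
import numpy as np

def dip_style_finetune(noisy, code_dim=16, hidden=32, steps=500, lr=0.1, seed=0):
    """Deep-image-prior-style unsupervised fitting (illustrative sketch).

    A tiny two-layer network maps a fixed random code to the signal and is
    trained by plain gradient descent on the MSE to reproduce `noisy`.
    In the paper's second stage, the analogous model is a condensed
    single-Transformer network fitted to the preprocessed image.
    """
    rng = np.random.default_rng(seed)
    y = np.asarray(noisy, dtype=float).ravel()
    n = y.size
    z = rng.standard_normal(code_dim)               # fixed random input code
    W1 = 0.1 * rng.standard_normal((hidden, code_dim))
    b1 = np.zeros(hidden)
    W2 = 0.1 * rng.standard_normal((n, hidden))
    b2 = np.zeros(n)
    losses = []
    for _ in range(steps):
        h = np.tanh(W1 @ z + b1)                    # hidden activations
        out = W2 @ h + b2                           # reconstruction
        r = out - y
        losses.append(float(np.mean(r * r)))        # MSE to the noisy target
        g = 2.0 * r / n                             # dLoss/dout
        gh = (W2.T @ g) * (1.0 - h * h)             # backprop through tanh
        W2 -= lr * np.outer(g, h)
        b2 -= lr * g
        W1 -= lr * np.outer(gh, z)
        b1 -= lr * gh
    # final forward pass with the updated parameters
    h = np.tanh(W1 @ z + b1)
    out = W2 @ h + b2
    return out.reshape(np.shape(noisy)), losses
```

Choosing `steps` plays the role of early stopping: too few iterations under-fit the image content, while too many eventually re-fit the noise, so in practice the budget (or a stopping criterion) is tuned to halt once the smooth structure is captured.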
Pages: 19
Related Papers
38 in total
  • [1] Boosting fine-tuning via Conditional Online Knowledge Transfer
    Liu, Zhiqiang
    Li, Yuhong
    Huang, Chengkai
    Luo, KunTing
    Liu, Yanxia
    NEURAL NETWORKS, 2024, 169: 325-333
  • [2] Boosting with fine-tuning for deep image denoising
    Xie, Zhonghua
    Liu, Lingjun
    Wang, Cheng
    Chen, Zehong
    SIGNAL PROCESSING, 2024, 217
  • [3] Bagging and Boosting Fine-Tuning for Ensemble Learning
    Zhao C.
    Peng R.
    Wu D.
    IEEE TRANSACTIONS ON ARTIFICIAL INTELLIGENCE, 2024, 5 (04): 1728-1742
  • [4] Boosting generalization of fine-tuning BERT for fake news detection
    Qin, Simeng
    Zhang, Mingli
    INFORMATION PROCESSING & MANAGEMENT, 2024, 61 (04)
  • [5] Unsupervised Fine-tuning of Optical Flow for Better Motion Boundary Estimation
    Alhersh, Taha
    Stuckenschmidt, Heiner
    PROCEEDINGS OF THE 14TH INTERNATIONAL JOINT CONFERENCE ON COMPUTER VISION, IMAGING AND COMPUTER GRAPHICS THEORY AND APPLICATIONS (VISAPP), VOL 5, 2019: 776-783
  • [6] Empirical analysis of the fine-tuning for Unsupervised Anomaly Detection in the ICT system
    Matsuo, Yoichi
    2023 19TH INTERNATIONAL CONFERENCE ON NETWORK AND SERVICE MANAGEMENT, CNSM, 2023
  • [7] Boosting Diagnostic Accuracy of Osteoporosis in Knee Radiograph Through Fine-Tuning CNN
    Kumar, Saumya
    Goswami, Puneet
    Batra, Shivani
    BIG DATA ANALYTICS IN ASTRONOMY, SCIENCE, AND ENGINEERING, BDA 2023, 2024, 14516: 97-109
  • [8] Adaptive fine-tuning strategy for few-shot learning
    Zhuang, Xinkai
    Shao, Mingwen
    Gao, Wei
    Yang, Jianxin
    JOURNAL OF ELECTRONIC IMAGING, 2022, 31 (06)
  • [9] Linear fine-tuning: a linear transformation based transfer strategy for deep MRI reconstruction
    Bi, Wanqing
    Xv, Jianan
    Song, Mengdie
    Hao, Xiaohan
    Gao, Dayong
    Qi, Fulang
    FRONTIERS IN NEUROSCIENCE, 2023, 17
  • [10] Efficient Index Learning via Model Reuse and Fine-tuning
    Liu, Guanli
    Qi, Jianzhong
    Kulik, Lars
    Soga, Kazuya
    Borovica-Gajic, Renata
    Rubinstein, Benjamin I. P.
    2023 IEEE 39TH INTERNATIONAL CONFERENCE ON DATA ENGINEERING WORKSHOPS, ICDEW, 2023: 60-66