Boosting Noise Reduction Effect via Unsupervised Fine-Tuning Strategy

Cited by: 1
Authors
Jiang, Xinyi [1 ]
Xu, Shaoping [1 ]
Wu, Junyun [1 ]
Zhou, Changfei [1 ]
Ji, Shuichen [1 ]
Affiliations
[1] Nanchang Univ, Sch Math & Comp Sci, Nanchang 330031, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2024, Vol. 14, Issue 05
Keywords
boosting denoising effect; supervised denoising models; data bias; unsupervised denoising models; flexibility; fine-tuning; IMAGE; SPARSE;
DOI
10.3390/app14051742
CLC Number
O6 [Chemistry];
Subject Classification Code
0703;
Abstract
Over the last decade, supervised denoising models trained on extensive datasets have exhibited remarkable performance in image denoising. However, these models offer limited flexibility, and their noise reduction capability degrades to varying degrees in practical scenarios, particularly when the noise distribution of a given noisy image deviates from that of the training images. To tackle this problem, we put forward a two-stage denoising model that appends an unsupervised fine-tuning phase after a supervised denoising model has processed the input noisy image and produced a denoised image (regarded as a preprocessed image). More specifically, in the first stage we replace the convolution blocks of the U-shaped network framework (as utilized in the deep image prior method) with Transformer modules, and we refer to the resultant model as a U-Transformer. The U-Transformer is trained on pairs of noisy images and their clean labels to preprocess the input noisy images. In the second stage, we condense the supervised U-Transformer into a simplified version that incorporates only one Transformer module with far fewer parameters, and we shift its training mode to unsupervised training, following an approach similar to that of the deep image prior method. This stage further eliminates the minor residual noise and artifacts present in the preprocessed image, yielding clearer and more realistic output images. Experimental results illustrate that the proposed method achieves significant noise reduction on both synthetic and real images, surpassing state-of-the-art methods. This superiority stems from the supervised model's ability to rapidly process given noisy images, while the unsupervised model leverages its flexibility to generate a fine-tuned network with enhanced noise reduction capability. Moreover, because the supervised model supplies higher-quality preprocessed images, the unsupervised fine-tuning model requires fewer parameters, facilitating rapid training and convergence and yielding high overall execution efficiency.
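
To make the two-stage pipeline concrete, the following is a minimal PyTorch sketch of the second stage, not the authors' implementation: the names `TinyTransformerRefiner` and `unsupervised_finetune` are hypothetical, a single `nn.TransformerEncoderLayer` over per-pixel tokens stands in for the paper's lone Transformer module, and an identity map stands in for the pretrained supervised U-Transformer. What it illustrates is the deep-image-prior-style training signal: the only fitting target is the stage-1 output itself, and early stopping (a small, fixed step budget) plays the regularizing role that keeps the network from re-fitting residual noise.

```python
import torch
import torch.nn as nn

class TinyTransformerRefiner(nn.Module):
    """Hypothetical stand-in for the stage-2 model: one Transformer
    encoder layer applied to per-pixel tokens of the image."""
    def __init__(self, channels=3, d_model=32, nhead=4):
        super().__init__()
        self.embed = nn.Conv2d(channels, d_model, 3, padding=1)
        self.block = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead, dim_feedforward=64, batch_first=True)
        self.proj = nn.Conv2d(d_model, channels, 3, padding=1)

    def forward(self, x):
        b, _, h, w = x.shape
        t = self.embed(x)                 # (b, d_model, h, w)
        t = t.flatten(2).transpose(1, 2)  # (b, h*w, d_model) token sequence
        t = self.block(t)                 # the single Transformer module
        t = t.transpose(1, 2).reshape(b, -1, h, w)
        return self.proj(t)

def unsupervised_finetune(preprocessed, steps=100, lr=1e-3):
    """DIP-style refinement: no clean label is used. The network is
    fitted to the stage-1 output, and stopping after a small, fixed
    number of steps keeps it from reproducing the residual noise."""
    net = TinyTransformerRefiner(channels=preprocessed.shape[1])
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(net(preprocessed), preprocessed)
        loss.backward()
        opt.step()
    net.eval()  # disable dropout inside the Transformer layer
    with torch.no_grad():
        return net(preprocessed)

# Usage: `stage1_denoiser` is a placeholder for the pretrained supervised
# U-Transformer; an identity map is used here purely for illustration.
noisy = torch.rand(1, 3, 32, 32)
stage1_denoiser = lambda x: x
preprocessed = stage1_denoiser(noisy)
refined = unsupervised_finetune(preprocessed)
print(refined.shape)  # torch.Size([1, 3, 32, 32])
```

Because stage 1 has already removed most of the noise, a refiner this small can plausibly converge within a modest step budget, which is consistent with the execution-efficiency argument in the abstract.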
Pages: 19