LoMAE: Simple Streamlined Low-Level Masked Autoencoders for Robust, Generalized, and Interpretable Low-Dose CT Denoising

Cited by: 1
Authors
Wang, Dayang [1 ]
Han, Shuo [1 ]
Xu, Yongshun [1 ]
Wu, Zhan [2 ,3 ]
Zhou, Li [1 ]
Morovati, Bahareh [1 ]
Yu, Hengyong [1 ]
Affiliations
[1] Univ Massachusetts Lowell, Dept Elect & Comp Engn, Lowell, MA 01854 USA
[2] Southeast Univ, Lab Image Sci & Technol, Nanjing 210096, Peoples R China
[3] Southeast Univ, Key Lab Comp Network & Informat Integrat, Minist Educ, Nanjing 210096, Peoples R China
Keywords
Noise reduction; Noise; Transformers; Computed tomography; Decoding; Robustness; Data models; Low-dose CT; masked autoencoder; self-pretraining; transformer; RECONSTRUCTION; ALGORITHMS; NETWORK;
DOI
10.1109/JBHI.2024.3454979
CLC Classification Number
TP [automation technology; computer technology]
Subject Classification Code
0812
Abstract
Low-dose computed tomography (LDCT) offers reduced X-ray radiation exposure but at the cost of compromised image quality, characterized by increased noise and artifacts. Recently, transformer models have emerged as a promising avenue to enhance LDCT image quality. However, the success of such models relies on a large amount of paired noisy and clean images, which are often scarce in clinical settings. In computer vision and natural language processing, masked autoencoders (MAE) have been recognized as a powerful self-pretraining method for transformers, due to their exceptional capability to extract representative features. However, the original pretraining and fine-tuning design fails to work in low-level vision tasks such as denoising. In response to this challenge, we redesign the classical encoder-decoder learning model and propose a simple yet effective streamlined low-level vision MAE, referred to as LoMAE, tailored to the LDCT denoising problem. Moreover, we introduce an MAE-GradCAM method to shed light on the latent learning mechanisms of the MAE/LoMAE. Additionally, we explore LoMAE's robustness and generalizability across a variety of noise levels. Experimental findings show that the proposed LoMAE enhances the denoising capability of the transformer and substantially reduces its dependency on high-quality, ground-truth data. It also demonstrates remarkable robustness and generalizability over a spectrum of noise levels. In summary, the proposed LoMAE provides promising solutions to the major issues in LDCT denoising, including interpretability, ground-truth data dependency, and model robustness/generalizability.
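The abstract's central idea, MAE-style self-pretraining of a transformer followed by supervised fine-tuning for denoising, can be illustrated with a minimal sketch. The code below is an illustrative assumption, not the paper's LoMAE architecture: the TinyMAE class, its patch size, mask ratio, embedding dimension, and the simple token-zeroing mask are hypothetical choices made for brevity.

```python
# Minimal MAE-style self-pretraining sketch for a denoising transformer (PyTorch).
# Illustrative assumption only -- NOT the LoMAE design from the paper.
import torch
import torch.nn as nn


class TinyMAE(nn.Module):
    def __init__(self, img_size=64, patch=8, dim=128, depth=4, mask_ratio=0.5):
        super().__init__()
        self.patch, self.mask_ratio = patch, mask_ratio
        self.num_patches = (img_size // patch) ** 2
        self.embed = nn.Linear(patch * patch, dim)              # patch -> token
        self.pos = nn.Parameter(torch.zeros(1, self.num_patches, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.decoder = nn.Linear(dim, patch * patch)            # token -> pixels

    def patchify(self, x):
        # (B, 1, H, W) -> (B, N, patch*patch)
        p = self.patch
        return (x.unfold(2, p, p).unfold(3, p, p)
                 .reshape(x.size(0), -1, p * p))

    def forward(self, x, pretrain=True):
        tokens = self.embed(self.patchify(x)) + self.pos        # (B, N, dim)
        if pretrain:
            # Randomly zero out a fraction of tokens; the network must
            # reconstruct the underlying patches from the remaining context.
            keep = (torch.rand(tokens.shape[:2], device=x.device)
                    > self.mask_ratio).unsqueeze(-1)
            tokens = tokens * keep
        return self.decoder(self.encoder(tokens))               # predicted patches


model = TinyMAE()
ldct = torch.randn(2, 1, 64, 64)                                # stand-in LDCT batch

# Stage 1: self-pretraining on noisy images only (no clean ground truth needed).
pretrain_loss = nn.functional.mse_loss(model(ldct, pretrain=True),
                                       model.patchify(ldct))

# Stage 2: supervised fine-tuning on (noisy, clean) pairs, reusing the encoder.
ndct = torch.randn(2, 1, 64, 64)                                # stand-in clean target
finetune_loss = nn.functional.mse_loss(model(ldct, pretrain=False),
                                       model.patchify(ndct))
```

In this sketch, stage 1 requires only noisy LDCT images, which is why self-pretraining can reduce dependence on paired clean data; stage 2 reuses the same encoder weights on (noisy, clean) pairs.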
Pages: 6815-6827
Number of pages: 13