Multimodal feature-guided diffusion model for low-count PET image denoising

Times cited: 0
Authors
Lin, Gengjia [1 ]
Jin, Yuxi [2 ]
Huang, Zhenxing [2 ]
Chen, Zixiang [2 ]
Liu, Haizhou [3 ,4 ]
Zhou, Chao [5 ]
Zhang, Xu [5 ]
Fan, Wei [5 ]
Zhang, Na [2 ]
Liang, Dong [2 ]
Cao, Peng [1 ]
Hu, Zhanli [2 ]
Affiliations
[1] Northeastern Univ, Coll Comp Sci & Engn, Shenyang 110819, Peoples R China
[2] Chinese Acad Sci, Shenzhen Inst Adv Technol, Lauterbur Res Ctr Biomed Imaging, Shenzhen 518055, Peoples R China
[3] Chinese Acad Med Sci & Peking Union Med Coll, Natl Canc Ctr, Natl Clin Res Ctr Canc, Canc Hosp, Dept Radiol, Shenzhen, Peoples R China
[4] Chinese Acad Med Sci & Peking Union Med Coll, Shenzhen Hosp, Shenzhen, Peoples R China
[5] Sun Yat Sen Univ, Canc Ctr, Dept Nucl Med, Guangzhou, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
multimodal feature-guided diffusion; physical degradation simulation; low-count PET denoising; positron emission tomography;
DOI
10.1002/mp.17764
CLC classification
R8 [Special Medicine]; R445 [Diagnostic Imaging];
Subject classification codes
1002; 100207; 1009;
Abstract
Background: To minimize radiation exposure while obtaining high-quality positron emission tomography (PET) images, various methods have been developed to derive standard-count PET (SPET) images from low-count PET (LPET) images. Although deep learning methods have enhanced LPET images, they rarely utilize the rich complementary information in MR images. Even when MR images are used, these methods typically employ early, intermediate, or late fusion strategies to merge features from different CNN streams, failing to fully exploit the complementary properties of multimodal fusion.
Purpose: In this study, we introduce a novel multimodal feature-guided diffusion model, termed MFG-Diff, designed for denoising LPET images with full utilization of MRI.
Methods: MFG-Diff replaces random Gaussian noise with LPET images and introduces a novel degradation operator to simulate the physical degradation processes of PET imaging. In addition, it uses a novel cross-modal guided restoration network to fully exploit the modality-specific features of the LPET and MR images, together with a multimodal feature fusion module that employs cross-attention mechanisms and positional encoding at multiple feature levels for better feature fusion.
Results: Under four count levels (2.5%, 5.0%, 10%, and 25%), the images generated by the proposed network outperformed those produced by other networks in qualitative, quantitative, and statistical evaluations. In particular, at the 2.5% count level, the peak signal-to-noise ratio of the generated PET images improved by more than 20%, the structural similarity index improved by more than 16%, and the root mean square error was reduced by nearly 50%. Moreover, the generated PET images showed significant correlation (Pearson correlation coefficient, 0.9924), consistency, and excellent quantitative agreement with the SPET images.
Conclusions: The proposed method outperformed existing state-of-the-art LPET denoising models and can be used to generate, from LPET images, SPET images that are highly correlated and consistent with the ground truth.
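The abstract states that MFG-Diff replaces the usual Gaussian-noise forward process with a degradation toward the LPET image itself. The paper's actual degradation operator is not reproduced here; the snippet below is only a minimal cold-diffusion-style sketch of the general idea, where `forward_degradation` and the linear interpolation schedule are hypothetical stand-ins.

```python
import numpy as np

def forward_degradation(spet, lpet, t, num_steps):
    """Hypothetical forward process: rather than adding Gaussian noise,
    interpolate from the standard-count image (t=0) toward the low-count
    image (t=num_steps), mimicking a physical loss of counts."""
    alpha = t / num_steps  # degradation level in [0, 1]
    return (1.0 - alpha) * spet + alpha * lpet
```

A restoration network trained under such a scheme learns to invert each degradation step, so sampling starts from the LPET image instead of pure noise.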
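The fusion module is described as using cross-attention between PET and MR features. The following is a bare NumPy sketch of single-head cross-attention, assuming (as is common but not confirmed by the abstract) that PET features form the queries and MR features the keys and values; positional encoding and multi-level fusion are omitted.

```python
import numpy as np

def cross_attention(pet_feats, mr_feats):
    """Single-head cross-attention sketch: PET features (n, d) attend
    over MR features (m, d), so MR information guides PET restoration."""
    q, k, v = pet_feats, mr_feats, mr_feats
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)                      # (n, m)
    scores -= scores.max(axis=-1, keepdims=True)         # stable softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                                    # (n, d)
```

Each output row is a convex combination of MR feature rows, weighted by similarity to the corresponding PET feature.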
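The reported metrics (PSNR, RMSE, Pearson correlation against SPET) can be computed as below; this is a generic sketch of the standard definitions, not the authors' evaluation code, and the `data_range` convention for PSNR is an assumption.

```python
import numpy as np

def rmse(ref, img):
    """Root mean square error between reference and test image."""
    return float(np.sqrt(np.mean((ref - img) ** 2)))

def psnr(ref, img, data_range=None):
    """Peak signal-to-noise ratio in dB; data_range defaults to the
    dynamic range of the reference image."""
    if data_range is None:
        data_range = float(ref.max() - ref.min())
    return float(20.0 * np.log10(data_range / rmse(ref, img)))

def pearson(ref, img):
    """Pearson correlation coefficient over all voxels."""
    return float(np.corrcoef(ref.ravel(), img.ravel())[0, 1])
```

SSIM is omitted here because it involves windowed local statistics; in practice one would use an established implementation such as `skimage.metrics.structural_similarity`.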
Pages: 13
Related articles
50 total
  • [21] Foreground Feature-Guided Camouflage Image Generation
    Chen, Yuelin
    An, Yuefan
    Huang, Yonsen
    Cai, Xiaodong
    INTERNATIONAL JOURNAL OF ADVANCED COMPUTER SCIENCE AND APPLICATIONS, 2025, 16 (01) : 405 - 411
  • [22] Retinal image registration via feature-guided Gaussian mixture model
    Liu, Chengyin
    Ma, Jiayi
    Ma, Yong
    Huang, Jun
    JOURNAL OF THE OPTICAL SOCIETY OF AMERICA A-OPTICS IMAGE SCIENCE AND VISION, 2016, 33 (07) : 1267 - 1276
  • [23] Multimodal Feature-Guided Pretraining for RGB-T Perception
    Ouyang, Junlin
    Jin, Pengcheng
    Wang, Qingwang
    IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING, 2024, 17 : 16041 - 16050
  • [24] Image colourisation using deep feature-guided image retrieval
    Chakraborty, Souradeep
    IET IMAGE PROCESSING, 2019, 13 (07) : 1130 - 1137
  • [25] Feature-guided Multimodal Sentiment Analysis towards Industry 4.0
    Yu, Bihui
    Wei, Jingxuan
    Yu, Bo
    Cai, Xingye
    Wang, Ke
    Sun, Huajun
    Bu, Liping
    Chen, Xiaowei
    COMPUTERS & ELECTRICAL ENGINEERING, 2022, 100
  • [26] Feature-Guided CNN for Denoising Images From Portable Ultrasound Devices
    Dong, Guanfang
    Ma, Yingnan
    Basu, Anup
IEEE ACCESS, 2021, 9 (09): 28272 - 28281
  • [27] Feature-guided attention network for medical image segmentation
    Zhou, Hao
    Sun, Chaoyu
    Huang, Hai
    Fan, Mingyu
    Yang, Xu
    Zhou, Linxiao
    MEDICAL PHYSICS, 2023, 50 (08) : 4871 - 4886
  • [28] Low-count PET/MR imaging with spatial transformation
    Huang, Zhenxing
    Li, Wenbo
    Wu, Yaping
    Yang, Lin
    Liu, Ziwei
    Dong, Yun
    Yang, Yongfeng
    Zheng, Hairong
    Liang, Dong
    Wang, Meiyun
    Hu, Zhanli
    JOURNAL OF NUCLEAR MEDICINE, 2024, 65
  • [29] Feature-guided shape-based image interpolation
    Lee, TY
    Lin, CH
    IEEE TRANSACTIONS ON MEDICAL IMAGING, 2002, 21 (12) : 1479 - 1489
  • [30] Feature-Guided SAR-to-Optical Image Translation
    Zhang, Jiexin
    Zhou, Jianjiang
    Lu, Xiwen
IEEE ACCESS, 2020, 8 (08): 70925 - 70937