Deep Convolutional Backbone Comparison for Automated PET Image Quality Assessment

Cited by: 0
Authors
Hopson, Jessica B. [1 ]
Flaus, Anthime [2 ]
McGinnity, Colm J. [2 ]
Neji, Radhouene [1 ,3 ]
Reader, Andrew J. [1 ]
Hammers, Alexander [2 ]
Affiliations
[1] Kings Coll London, Dept Biomed Engn, London WC2R 2LS, England
[2] Kings Coll London, Kings Coll London & Guys & St Thomas PET Ctr, London WC2R, England
[3] Siemens Healthcare Ltd, MR Res Collaborat, Camberley GU15 3YL, England
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
Convolutional neural networks (CNNs); deep learning; image quality; image reconstruction; transfer learning; NEURAL-NETWORKS; CLASSIFICATION; FRAMEWORK;
DOI
10.1109/TRPMS.2024.3436697
CLC Classification Number
R8 [Special Medicine]; R445 [Diagnostic Imaging];
Discipline Classification Code
1002 ; 100207 ; 1009 ;
Abstract
Pretraining deep convolutional network mappings on natural images helps with medical imaging analysis tasks; this is important given the limited number of clinically annotated medical images. However, many different 2-D pretrained backbone networks are currently available. This work compared 18 different backbones from 5 architecture groups (pretrained on ImageNet) for the task of assessing [F-18]FDG brain positron emission tomography (PET) image quality (reconstructed at seven simulated doses), based on three clinical image quality metrics (global quality rating, pattern recognition, and diagnostic confidence). Using 2-D randomly sampled patches, up to eight patients (at three dose levels each) were used for training, with three separate patient datasets used for testing. Each backbone was trained five times with the same training and validation sets, and with six cross-folds. Training only the final fully connected layer (with ~6000-20000 trainable parameters) achieved a test mean absolute error (MAE) of ~0.5 (within the intrinsic uncertainty of clinical scoring). To compare "classical" and over-parameterized regimes, the pretrained weights of the last 40% of the network layers were then unfrozen. The MAE fell below 0.5 for 14 of the 18 backbones assessed, including two that had previously failed to train. In general, backbones with residual units (e.g., DenseNets and ResNetV2s) were best suited to this task, achieving the lowest MAE at test time (~0.45-0.5). This proof-of-concept study shows that over-parameterization may also be important for automated PET image quality assessments.
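The two-stage fine-tuning protocol described in the abstract — first training only a new final fully connected layer on top of a frozen ImageNet-pretrained backbone, then unfreezing the pretrained weights of the last 40% of the layers — can be sketched framework-agnostically. The `Layer` stub and `set_trainable_fraction` helper below are hypothetical illustrations of the freeze/unfreeze logic, not the authors' code:

```python
from dataclasses import dataclass


@dataclass
class Layer:
    """Minimal stand-in for a deep-learning framework layer with a trainable flag."""
    name: str
    trainable: bool = False


def set_trainable_fraction(layers, fraction):
    """Freeze all backbone layers except the last `fraction` of them.

    fraction=0.0 mimics stage 1 of the protocol (backbone fully frozen;
    only a separate, newly added fully connected head would be trained).
    fraction=0.4 mimics stage 2 (last 40% of backbone layers unfrozen).
    """
    n_unfrozen = int(round(len(layers) * fraction))
    cutoff = len(layers) - n_unfrozen
    for i, layer in enumerate(layers):
        layer.trainable = i >= cutoff
    return layers


# Hypothetical 10-layer backbone: stage 1 freezes everything,
# stage 2 unfreezes the last 40% (here, 4 layers).
backbone = [Layer(f"conv_{i}") for i in range(10)]

set_trainable_fraction(backbone, 0.0)
stage1 = [layer.trainable for layer in backbone]   # all frozen

set_trainable_fraction(backbone, 0.4)
stage2 = [layer.trainable for layer in backbone]   # last 4 unfrozen

print(stage1.count(True), stage2.count(True))  # → 0 4
```

In a real framework this corresponds to toggling each layer's `trainable` attribute before compiling; in stage 1 only the new head's parameters (~6000-20000 in the study) are updated, while stage 2 moves the model into the over-parameterized regime the abstract compares against.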
Pages: 893-901
Page count: 9