Automated quantification of brain PET in PET/CT using deep learning-based CT-to-MR translation: a feasibility study

Times cited: 0
Authors
Kim, Daesung [1 ]
Choo, Kyobin [2 ]
Lee, Sangwon [3 ]
Kang, Seongjin [3 ]
Yun, Mijin [3 ]
Yang, Jaewon [4 ]
Affiliations
[1] Yonsei Univ, Dept Artificial Intelligence, Seoul, South Korea
[2] Yonsei Univ, Dept Comp Sci, Seoul, South Korea
[3] Yonsei Univ, Dept Nucl Med, Coll Med, Seoul, South Korea
[4] Univ Texas Southwestern, Dept Radiol, Dallas, TX USA
Funding
National Research Foundation of Singapore;
Keywords
PET/CT; Amyloid; Quantification; Deep learning; Segmentation; TEMPLATE;
DOI
10.1007/s00259-025-07132-2
Chinese Library Classification (CLC)
R8 [Special Medicine]; R445 [Diagnostic Imaging];
Discipline classification codes
1002 ; 100207 ; 1009 ;
Abstract
Purpose: Quantitative analysis of PET images in brain PET/CT relies on MRI-derived regions of interest (ROIs). However, paired PET/CT and MR images are not always available, and aligning them is challenging when their acquisition times differ considerably. To address these problems, this study proposes a deep learning framework that translates the CT of PET/CT into synthetic MR images (MRSYN) and performs automated quantitative regional analysis using MRSYN-derived segmentation.

Methods: In this retrospective study, 139 subjects who underwent brain [F-18]FBB PET/CT and T1-weighted MRI were included. A U-Net-like model was trained to translate CT images into MRSYN; subsequently, a separate model was trained to segment MRSYN into 95 regions. Regional and composite standardised uptake value ratios (SUVr) were calculated in [F-18]FBB PET images using the acquired ROIs. MRSYN was evaluated with quantitative measures including the structural similarity index measure (SSIM), and the MRSYN-based segmentation was evaluated with the Dice similarity coefficient (DSC). The Wilcoxon signed-rank test was performed on SUVrs computed from MRSYN and from ground-truth MR (MRGT).

Results: Compared to MRGT, the mean SSIM of MRSYN was 0.974 +/- 0.005. The MRSYN-based segmentation achieved a mean DSC of 0.733 across the 95 regions. No statistically significant difference (P > 0.05) in SUVr was found between ROIs derived from MRSYN and those derived from MRGT, except for the precuneus.

Conclusion: We demonstrated a deep learning framework for automated regional brain analysis in PET/CT with MRSYN. The proposed framework can benefit patients who have difficulty undergoing an MRI scan.
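The quantitative pipeline described in the abstract rests on two standard measures: the standardised uptake value ratio (SUVr, mean uptake in a target region divided by mean uptake in a reference region) and the Dice similarity coefficient between a predicted and a ground-truth segmentation. A minimal NumPy sketch of both follows; the function names, array layout, and label conventions are illustrative assumptions, not the authors' actual implementation:

```python
import numpy as np

def regional_suvr(pet, labels, target_ids, reference_id):
    """SUVr: mean PET uptake over the target label(s) divided by
    mean uptake over the reference label (e.g. cerebellar grey)."""
    target_mask = np.isin(labels, target_ids)
    reference_mask = labels == reference_id
    return pet[target_mask].mean() / pet[reference_mask].mean()

def dice(seg_a, seg_b, region_id):
    """Dice similarity coefficient for one label:
    2 * |A ∩ B| / (|A| + |B|)."""
    a = seg_a == region_id
    b = seg_b == region_id
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```

In the paper's setting, `labels` would come from the MRSYN-derived 95-region segmentation and `seg_a`/`seg_b` from the MRSYN- and MRGT-based segmentations of the same subject.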
Pages: 9