Enhanced Risk Stratification of Gastrointestinal Stromal Tumors Through Cross-Modality Synthesis from CT to [18F]-FDG PET Images

Cited by: 0
Authors
Huang, Kun [1 ]
Gao, Mengmeng [2 ]
Antonecchia, Emanuele [3 ,4 ]
Zhang, Li [5 ,6 ]
Zhou, Ziling [2 ]
Zou, Xianghui [1 ]
Li, Zhen [2 ]
Cao, Wei [5 ,6 ]
Liu, Yuqing [7 ]
D'Ascenzo, Nicola [1 ]
Affiliations
[1] Univ Sci & Technol China, Dept Elect Engn & Informat Sci, Hefei 230026, Peoples R China
[2] Huazhong Univ Sci & Technol, Tongji Hosp, Tongji Med Coll, Dept Radiol, Wuhan 430030, Peoples R China
[3] Huazhong Univ Sci & Technol, Sch Life Sci & Technol, Wuhan 430074, Peoples R China
[4] Ist Neurol Mediterraneo, Dept Innovat Engn & Phys, IRCCS, I-56127 Pozzilli, Italy
[5] Tongji Med Coll, Union Hosp, Dept Nucl Med, Wuhan 430022, Peoples R China
[6] Huazhong Univ Sci & Technol, Hubei Key Lab Mol Imaging, Wuhan 430022, Peoples R China
[7] Hefei Comprehens Natl Sci Ctr, Inst Artificial Intelligence, Hefei 230094, Peoples R China
Keywords
Tumors; Computed tomography; Transformers; Feature extraction; Medical diagnostic imaging; Generators; Positron emission tomography; Plasmas; Training; Image reconstruction; Computed tomography (CT); gastrointestinal stromal tumor (GIST); generative adversarial network (GAN); PET; risk stratification; Transformer
DOI
10.1109/TRPMS.2024.3514779
Chinese Library Classification (CLC)
R8 [Special Medicine]; R445 [Diagnostic Imaging]
Discipline Classification Codes
1002; 100207; 1009
Abstract
Risk stratification algorithms for gastrointestinal stromal tumors (GISTs) are based mainly on computed tomography (CT) data. Although [18F]-fluorodeoxyglucose positron emission tomography ([18F]-FDG PET) imaging may improve their performance, challenges in interpreting PET images of the gastrointestinal tract still limit the widespread integration of PET into routine clinical protocols, resulting in a scarcity of PET data with which to develop and train stratification models. To address this issue, we propose to enrich existing [18F]-FDG PET GIST datasets with pseudo-images generated by a novel conditional PET generative adversarial network (CPGAN), which employs a weighted fusion of CT images and tumor masks and also embeds clinical data. For GIST assessment, we propose the transformer-based multimodal network for GIST risk stratification (TMGRS), which is trained on the enriched dataset and exploits the properties of transformers to process PET and CT images simultaneously. The models were trained and validated on a multicenter dataset comprising 208 patients. Compared with existing methods, CPGAN-synthesized PET images show a peak signal-to-noise ratio increased on average by 18%, and they improve risk stratification, which achieves a remarkable accuracy of 0.937 when the TMGRS network is used. These results underscore the potential of the CPGAN network to provide more reliable GIST predictions.
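The abstract's core conditioning idea (a generator driven by a weighted fusion of the CT slice and the tumor mask, with clinical variables embedded alongside) can be illustrated with a minimal PyTorch sketch. The fusion weight alpha, the dense clinical embedding, and all layer sizes below are illustrative assumptions, not the published CPGAN architecture.

```python
# Minimal, hypothetical sketch of the CPGAN conditioning step described in the
# abstract: a generator conditioned on a weighted fusion of the CT slice and
# the tumor mask, with clinical data embedded as an extra channel.
# alpha, layer sizes, and the clinical embedding are illustrative assumptions.
import torch
import torch.nn as nn


class FusionConditionedGenerator(nn.Module):
    def __init__(self, clinical_dim: int = 8, base_ch: int = 32, alpha: float = 0.7):
        super().__init__()
        self.alpha = alpha  # assumed fusion weight between CT and tumor mask
        # Embed clinical variables and broadcast them to a 64x64 conditioning map.
        self.clinical_embed = nn.Linear(clinical_dim, 64 * 64)
        self.net = nn.Sequential(
            nn.Conv2d(2, base_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base_ch, base_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base_ch, 1, 3, padding=1),  # synthesized pseudo-PET slice
        )

    def forward(self, ct, mask, clinical):
        # Weighted fusion of CT and tumor mask into one conditioning channel.
        fused = self.alpha * ct + (1.0 - self.alpha) * mask
        # Reshape the clinical embedding to an image-sized map and concatenate.
        clin_map = self.clinical_embed(clinical).view(-1, 1, 64, 64)
        x = torch.cat([fused, clin_map], dim=1)
        return self.net(x)


# Usage on dummy 64x64 inputs:
gen = FusionConditionedGenerator()
ct = torch.rand(1, 1, 64, 64)
mask = torch.randint(0, 2, (1, 1, 64, 64)).float()
clinical = torch.rand(1, 8)
pseudo_pet = gen(ct, mask, clinical)
print(pseudo_pet.shape)  # torch.Size([1, 1, 64, 64])
```

Running the sketch on random tensors produces a single-channel pseudo-PET slice of the same size; in the paper the generator is trained adversarially against real [18F]-FDG PET images, a step omitted here.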
Pages: 487-496
Number of pages: 10