Learning feature fusion via an interpretation method for tumor segmentation on PET/CT

Cited by: 5
Authors
Kang, Susu [1 ,5 ]
Chen, Zhiyuan [1 ]
Li, Laquan [2 ]
Lu, Wei [3 ]
Qi, X. Sharon [4 ]
Tan, Shan [1 ]
Affiliations
[1] Huazhong Univ Sci & Technol, Sch Artificial Intelligence & Automat, Key Lab Image Informat Proc & Intelligent Control, Minist Educ China, 1037 Luo Yu Rd, Wuhan 430074, Hubei, Peoples R China
[2] Chongqing Univ Posts & Telecommun, Sch Sci, 2 Chong Wen Rd, Chongqing 400065, Peoples R China
[3] Mem Sloan Kettering Canc Ctr, Dept Med Phys, 1275 York Ave, New York, NY 10065 USA
[4] Univ Calif Los Angeles, Sch Med, Dept Radiat Oncol, 10833 Le Conte Ave, Los Angeles, CA 90001 USA
[5] Huazhong Univ Sci & Technol, Innovat Inst, 1037 Luo Yu Rd, Wuhan 430074, Hubei, Peoples R China
Keywords
Tumor segmentation; PET/CT images; Feature fusion; Network interpretation; Deep learning; IMAGES;
DOI
10.1016/j.asoc.2023.110825
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Accurate tumor segmentation of multi-modality PET/CT images plays a vital role in computer-aided cancer diagnosis and treatment. It is crucial to rationally fuse the complementary information in multi-modality PET/CT segmentation. However, existing methods usually lack interpretability and fail to sufficiently identify and aggregate critical information from the different modalities. In this study, we proposed a novel segmentation framework that incorporated an interpretation module into the multi-modality segmentation backbone. The interpretation module highlighted critical features from each modality based on their contributions to the segmentation performance. To provide explicit supervision for the interpretation module, we introduced a novel interpretation loss with two fusion schemes: strengthened fusion and perturbed fusion. The interpretation loss guided the interpretation module to focus on informative features, enhancing its effectiveness in generating meaningful interpretable masks. Under the guidance of the interpretation module, the proposed approach can fully exploit meaningful features from each modality, leading to better integration of multi-modality information and improved segmentation performance. Ablation and comparison experiments were conducted on two PET/CT tumor segmentation datasets. The proposed approach surpassed the baseline by 1.4 and 1.8 Dice points on the two datasets, respectively, indicating the improvement achieved by the interpretation method. Furthermore, the proposed approach outperformed the best comparison approach by 0.9 and 0.6 Dice points on the two datasets, respectively. In addition, visualization and perturbation experiments further illustrated the effectiveness of the interpretation method in highlighting critical features.
Pages: 12