Learning feature fusion via an interpretation method for tumor segmentation on PET/CT

Cited by: 5
Authors
Kang, Susu [1 ,5 ]
Chen, Zhiyuan [1 ]
Li, Laquan [2 ]
Lu, Wei [3 ]
Qi, X. Sharon [4 ]
Tan, Shan [1 ]
Affiliations
[1] Huazhong Univ Sci & Technol, Sch Artificial Intelligence & Automat, Key Lab Image Informat Proc & Intelligent Control, Minist Educ China, 1037 Luo Yu Rd, Wuhan 430074, Hubei, Peoples R China
[2] Chongqing Univ Posts & Telecommun, Sch Sci, 2 Chong Wen Rd, Chongqing 400065, Peoples R China
[3] Mem Sloan Kettering Canc Ctr, Dept Med Phys, 1275 York Ave, New York, NY 10065 USA
[4] Univ Calif Los Angeles, Sch Med, Dept Radiat Oncol, 10833 Le Conte Ave, Los Angeles, CA 90001 USA
[5] Huazhong Univ Sci & Technol, Innovat Inst, 1037 Luo Yu Rd, Wuhan 430074, Hubei, Peoples R China
Keywords
Tumor segmentation; PET/CT images; Feature fusion; Network interpretation; Deep learning
DOI
10.1016/j.asoc.2023.110825
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Accurate tumor segmentation of multi-modality PET/CT images plays a vital role in computer-aided cancer diagnosis and treatment. Rational fusion of the complementary information from the two modalities is crucial for multi-modality PET/CT segmentation. However, existing methods usually lack interpretability and fail to sufficiently identify and aggregate the critical information from each modality. In this study, we proposed a novel segmentation framework that incorporates an interpretation module into a multi-modality segmentation backbone. The interpretation module highlights critical features from each modality according to their contributions to segmentation performance. To provide explicit supervision for the interpretation module, we introduced a novel interpretation loss with two fusion schemes: strengthened fusion and perturbed fusion. The interpretation loss guides the interpretation module to focus on informative features, enhancing its effectiveness in generating meaningful interpretable masks. Under the guidance of the interpretation module, the proposed approach fully exploits meaningful features from each modality, leading to better integration of multi-modality information and improved segmentation performance. Ablation and comparison experiments were conducted on two PET/CT tumor segmentation datasets. The proposed approach surpassed the baseline by 1.4 and 1.8 Dice points on the two datasets, respectively, demonstrating the improvement achieved by the interpretation method, and it outperformed the best comparison approach by 0.9 and 0.6 Dice points on the two datasets, respectively. In addition, visualization and perturbation experiments further illustrated the effectiveness of the interpretation method in highlighting critical features.
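The abstract describes weighting each modality's features by a learned interpretation mask before fusion. The following is a minimal NumPy sketch of that general idea only, not the authors' implementation: the function name `interpret_and_fuse`, the sigmoid-mask parameterization, and the weighted-sum fusion are all illustrative assumptions, and the paper's strengthened/perturbed fusion schemes and interpretation loss are not reproduced here.

```python
import numpy as np

def sigmoid(x):
    """Squash logits into (0, 1) so each mask value is a soft feature weight."""
    return 1.0 / (1.0 + np.exp(-x))

def interpret_and_fuse(pet_feat, ct_feat, pet_logits, ct_logits):
    """Hypothetical sketch: weight each modality's feature map by an
    interpretable mask, then fuse by weighted summation.

    pet_feat, ct_feat : per-modality feature maps (same shape)
    pet_logits, ct_logits : learnable mask logits (same shape as the features);
        in a real model these would be produced by an interpretation module
        and trained with an interpretation loss.
    """
    m_pet = sigmoid(pet_logits)   # interpretable mask for PET features
    m_ct = sigmoid(ct_logits)     # interpretable mask for CT features
    fused = m_pet * pet_feat + m_ct * ct_feat
    return fused, m_pet, m_ct

# Toy usage with random "features" standing in for network activations.
rng = np.random.default_rng(0)
pet = rng.standard_normal((4, 4))
ct = rng.standard_normal((4, 4))
fused, m_pet, m_ct = interpret_and_fuse(
    pet, ct,
    pet_logits=rng.standard_normal((4, 4)),
    ct_logits=rng.standard_normal((4, 4)),
)
```

The masks `m_pet` and `m_ct` are what would be visualized to inspect which features each modality contributes, which is the interpretability angle the abstract emphasizes.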
Pages: 12