Prediction of brain tumor recurrence location based on multi-modal fusion and nonlinear correlation learning

Cited by: 6
Authors
Zhou, Tongxue [1 ]
Noeuveglise, Alexandra [2 ]
Modzelewski, Romain [2 ]
Ghazouani, Fethi [3 ]
Thureau, Sebastien [2 ]
Fontanilles, Maxime [2 ]
Ruan, Su [3 ]
Affiliations
[1] Hangzhou Normal Univ, Sch Informat Sci & Technol, Hangzhou 311121, Peoples R China
[2] Henri Becquerel Canc Ctr, Dept Nucl Med, F-76038 Rouen, France
[3] Univ Rouen Normandie, LITIS QuantIF, F-76183 Rouen, France
Funding
National Natural Science Foundation of China;
Keywords
Brain tumor recurrence; Location prediction; Multi-modal fusion; Correlation learning; Deep learning; SEGMENTATION; DIAGNOSIS; NETWORKS; INVASION; MRI;
DOI
10.1016/j.compmedimag.2023.102218
CLC number
R318 [Biomedical Engineering];
Discipline code
0831;
Abstract
Brain tumors are among the leading causes of cancer death. High-grade brain tumors are likely to recur even after standard treatment. Therefore, developing a method to predict the brain tumor recurrence location plays an important role in treatment planning and can potentially prolong patients' survival time. Little work has addressed this issue so far. In this paper, we present a deep learning-based brain tumor recurrence location prediction network. Since the available dataset is usually small, we propose to use transfer learning to improve the prediction. We first train a multi-modal brain tumor segmentation network on the public BraTS 2021 dataset. The pre-trained encoder is then transferred to our private dataset to extract rich semantic features. Following that, a multi-scale multi-channel feature fusion model and a nonlinear correlation learning module are developed to learn effective features. The correlation between multi-channel features is modeled by a nonlinear equation. To measure the similarity between the distribution of the original features of one modality and that of the estimated correlated features of another modality, we propose to use the Kullback-Leibler divergence. Based on this divergence, a correlation loss function is designed to maximize the similarity between the two feature distributions. Finally, two decoders are constructed to jointly segment the present brain tumor and predict its future recurrence location. To the best of our knowledge, this is the first work that can segment the present tumor and simultaneously predict the future recurrence location, making treatment planning more efficient and precise. The experimental results demonstrate the effectiveness of the proposed method in predicting the brain tumor recurrence location from a limited dataset.
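The Kullback-Leibler-divergence-based correlation loss described in the abstract can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the feature vectors, the softmax normalization used to turn them into distributions, and the function names are all assumptions.

```python
import numpy as np

def softmax(x):
    # Convert a feature vector into a probability distribution
    # (shifted by the max for numerical stability).
    e = np.exp(x - np.max(x))
    return e / e.sum()

def kl_divergence(p, q, eps=1e-12):
    # KL(p || q) = sum_i p_i * log(p_i / q_i); clip to avoid log(0).
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

def correlation_loss(feat_a, feat_b_estimated):
    # Divergence between the distribution of one modality's original
    # features and that of the correlated features estimated from the
    # other modality; minimizing it maximizes their similarity.
    return kl_divergence(softmax(feat_a), softmax(feat_b_estimated))

# Toy example with a 4-dimensional feature vector.
f = np.array([0.2, 1.5, -0.3, 0.8])
print(correlation_loss(f, f))      # identical features -> 0.0
print(correlation_loss(f, 2 * f))  # differing features -> positive loss
```

In a training loop this term would be added, with a weighting factor, to the segmentation and prediction losses, so the network is pushed to align the two modalities' feature distributions.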
Pages: 8
Related papers
50 in total
  • [41] Human activity recognition based on multi-modal fusion
    Zhang, Cheng
    Zu, Tianqi
    Hou, Yibin
    He, Jian
    Yang, Shengqi
    Dong, Ruihai
    CCF TRANSACTIONS ON PERVASIVE COMPUTING AND INTERACTION, 2023, 5 (03) : 321 - 332
  • [42] Deep multi-modal fusion network with gated unit for breast cancer survival prediction
    Yuan, Han
    Xu, Hongzhen
    COMPUTER METHODS IN BIOMECHANICS AND BIOMEDICAL ENGINEERING, 2024, 27 (07) : 883 - 896
  • [43] Brain Tumor Segmentation for Multi-Modal MRI with Missing Information
    Feng, Xue
    Ghimire, Kanchan
    Kim, Daniel D.
    Chandra, Rajat S.
    Zhang, Helen
    Peng, Jian
    Han, Binghong
    Huang, Gaofeng
    Chen, Quan
    Patel, Sohil
    Bettagowda, Chetan
    Sair, Haris I.
    Jones, Craig
    Jiao, Zhicheng
    Yang, Li
    Bai, Harrison
    JOURNAL OF DIGITAL IMAGING, 2023, 36 (05) : 2075 - 2087
  • [46] Large-scale crop dataset and deep learning-based multi-modal fusion framework for more accurate GxE genomic prediction
    Zou, Qixiang
    Tai, Shuaishuai
    Yuan, Qianguang
    Nie, Yating
    Gou, Heping
    Wang, Longfei
    Li, Chuanxiu
    Jing, Yi
    Dong, Fangchun
    Yue, Zhen
    Rong, Yi
    Fang, Xiaodong
    Xiong, Shengwu
    COMPUTERS AND ELECTRONICS IN AGRICULTURE, 2025, 230
  • [47] Uncertainty quantification and attention-aware fusion guided multi-modal MR brain tumor segmentation
    Zhou, Tongxue
    Zhu, Shan
    COMPUTERS IN BIOLOGY AND MEDICINE, 2023, 163
  • [48] Adaptively multi-modal contrastive fusion network for molecular properties prediction
    Tang, Wenyan
    Li, Meng
    Zhan, Yi
    Chen, Bin
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2025, 152
  • [49] CIRF: Coupled Image Reconstruction and Fusion Strategy for Deep Learning Based Multi-Modal Image Fusion
    Zheng, Junze
    Xiao, Junyan
    Wang, Yaowei
    Zhang, Xuming
    SENSORS, 2024, 24 (11)
  • [50] Multi-modal fusion for business process prediction in call center scenarios
    Cheng, Long
    Du, Li
    Liu, Cong
    Hu, Yang
    Fang, Fang
    Ward, Tomas
    INFORMATION FUSION, 2024, 108