Graph Convolutional Networks Based Multi-modal Data Integration for Breast Cancer Survival Prediction

Cited by: 2
Authors
Hu, Hongbin [1 ]
Liang, Wenbin [2 ]
Zou, Xitao [3 ]
Zou, Xianchun [1 ]
Affiliations
[1] Southwest Univ, Coll Comp & Informat Sci, Chongqing 400715, Peoples R China
[2] Southwest Univ, Coll Chem & Chem Engn, Key Lab Luminescence Anal & Mol Sensing, Minist Educ, Chongqing 400715, Peoples R China
[3] Chongqing Univ Sci & Technol, Sch Intelligent Technol & Engn, Chongqing 401331, Peoples R China
Keywords
Breast Cancer; Survival Prediction; Graph Convolutional Networks;
DOI: 10.1007/978-981-97-5689-6_8
CLC Classification
TP18 [Theory of Artificial Intelligence]
Subject Classification
081104; 0812; 0835; 1405
Abstract
Recently, multi-modal breast cancer survival prediction (MBCSP) has been widely researched and has made substantial progress. However, most existing MBCSP methods overlook the structural information among patients. Studies that do address structural information often ignore the abundant semantic information within multi-modal data, despite its significant impact on the efficacy of cancer survival prediction. Herein, we propose a novel method for breast cancer survival prediction, termed graph convolutional networks based multi-modal data integration for breast cancer survival prediction (GMBS). In essence, GMBS first defines a series of multi-modal fusion modules to integrate diverse patient data modalities, yielding robust initial embeddings. Subsequently, GMBS introduces a patient-patient graph construction module to delineate inter-patient relationships effectively. Lastly, GMBS incorporates a graph convolutional network framework to harness the structural information encoded in the constructed graph. Extensive experiments on two well-known MBCSP datasets demonstrate the superior performance of GMBS compared to representative baseline methods.
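The pipeline the abstract outlines (fuse per-patient modalities into embeddings, build a patient-patient graph, then propagate with a GCN) can be sketched minimally as follows. This is an illustrative sketch only: the similarity-threshold graph construction, layer sizes, and random "fused" features below are assumptions for demonstration, not the paper's actual GMBS design.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN propagation step: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    n = A.shape[0]
    A_hat = A + np.eye(n)                    # add self-loops
    d = A_hat.sum(axis=1)                    # degrees (>= 1, so no div by zero)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))   # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# Toy setup: 4 patients with 5-dim "fused" embeddings standing in for the
# output of a multi-modal fusion step (hypothetical values).
rng = np.random.default_rng(0)
H = rng.standard_normal((4, 5))

# Patient-patient graph: connect patients whose embedding similarity is
# positive (an assumed construction rule, chosen only for illustration).
S = H @ H.T
np.fill_diagonal(S, 0.0)
A = (S > 0).astype(float)

W = rng.standard_normal((5, 2))              # project to 2 hidden units
out = gcn_layer(A, H, W)
print(out.shape)  # (4, 2): one hidden representation per patient
```

In a real survival model, `out` would feed a prediction head (e.g. a risk score per patient); here it only demonstrates how graph structure mixes information across neighboring patients.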
Pages: 85-98 (14 pages)