Predicting the Survival of Cancer Patients With Multimodal Graph Neural Network

Cited by: 26
Authors
Gao, Jianliang [1 ]
Lyu, Tengfei [1 ]
Xiong, Fan [1 ]
Wang, Jianxin [1 ]
Ke, Weimao [2 ]
Li, Zhao [3 ]
Affiliations
[1] Cent South Univ, Sch Comp Sci & Engn, Changsha 410083, Hunan, Peoples R China
[2] Drexel Univ, Coll Comp & Informat, Philadelphia, PA 19104 USA
[3] Alibaba Grp, Hangzhou 311121, Zhejiang, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Fuses; Graph neural networks; Bipartite graph; Gene expression; Task analysis; Cancer; Medical information retrieval; cancer survival prediction; graph neural networks; multimodal; ALZHEIMERS-DISEASE; PROGNOSIS; FUSION; REPRESENTATION; CLASSIFICATION; FRAMEWORK;
DOI
10.1109/TCBB.2021.3083566
Chinese Library Classification
Q5 [Biochemistry]
Subject Classification Codes
071010; 081704
Abstract
In recent years, survival prediction for cancer patients has held important significance for worldwide health problems and has attracted much attention in the medical informatics community. Cancer survival prediction can be cast as a classification task, which is meaningful yet challenging; nevertheless, research in this field is still limited. In this work, we design a novel Multimodal Graph Neural Network (MGNN) framework for predicting cancer survival, which explores the features of real-world multimodal data such as gene expression, copy number alteration, and clinical data in a unified framework. Specifically, we first construct bipartite graphs between patients and multimodal data to explore their inherent relations. Subsequently, the embedding of each patient on each bipartite graph is obtained with a graph neural network. Finally, a multimodal fusion neural layer is proposed to fuse the medical features from the different modalities. Comprehensive experiments conducted on real-world datasets demonstrate the superiority of our model, with significant improvements over the state of the art. Furthermore, the proposed MGNN is validated to be more robust on four other cancer datasets.
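The pipeline described in the abstract (per-modality patient-feature bipartite graphs, GNN-style aggregation into patient embeddings, then a fusion layer) can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the graph construction, aggregation rule (degree-normalized mean), fusion (concatenation plus a linear projection), and all sizes and names (`n_patients`, `modality_dims`, `bipartite_embed`, random weights) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 8 patients, three modalities with different feature counts.
n_patients = 8
modality_dims = {"gene_expression": 20, "copy_number": 15, "clinical": 6}
embed_dim = 4

def bipartite_embed(adj, feature_emb):
    """One GNN-style aggregation step on a patient-feature bipartite graph:
    each patient embedding is the degree-normalized mean of the embeddings
    of the feature nodes it is connected to."""
    deg = adj.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0  # avoid division by zero for patients with no edges
    return (adj @ feature_emb) / deg

# One bipartite graph per modality (edges stand in for nonzero measurements),
# plus an embedding vector for every feature node.
patient_embs = []
for name, d in modality_dims.items():
    adj = (rng.random((n_patients, d)) > 0.5).astype(float)  # patient-feature edges
    feat_emb = rng.normal(size=(d, embed_dim))               # feature-node embeddings
    patient_embs.append(bipartite_embed(adj, feat_emb))

# Fusion layer stand-in: concatenate the per-modality patient embeddings
# and apply a linear projection to two classes (short/long survival).
fused = np.concatenate(patient_embs, axis=1)   # shape (n_patients, 3 * embed_dim)
W = rng.normal(size=(fused.shape[1], 2))
logits = fused @ W
survival_prob = 1.0 / (1.0 + np.exp(-(logits[:, 1] - logits[:, 0])))
print(survival_prob.shape)  # one survival score per patient
```

In a trained model the feature-node embeddings and fusion weights would be learned end-to-end; the sketch only shows how the three stages compose.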
Pages: 699-709 (11 pages)