Autoencoder-assisted latent representation learning for survival prediction and multi-view clustering on multi-omics cancer subtyping

Cited by: 2
Authors
Zhu, Shuwei [1 ]
Wang, Wenping [1 ]
Fang, Wei [1 ]
Cui, Meiji [2 ]
Affiliations
[1] Jiangnan Univ, Sch Artificial Intelligence & Comp Sci, Jiangsu Prov Engn Lab Pattern Recognit & Computat, Wuxi 214122, Peoples R China
[2] Nanjing Univ Sci & Technol, Sch Intelligent Mfg, Nanjing 210094, Peoples R China
Keywords
multi-omic data; cancer subtyping; multi-view clustering; autoencoder; latent space; data integration; ALGORITHM;
D O I
10.3934/mbe.2023933
CLC classification
Q [Biological Sciences];
Discipline codes
07 ; 0710 ; 09 ;
Abstract
Cancer subtyping (i.e., the identification of cancer subtypes) based on multi-omics data has played an important role in advancing diagnosis, prognosis and treatment, which has driven the development of advanced multi-view clustering algorithms. However, the high dimensionality and heterogeneity of multi-omics data greatly affect the performance of these methods. In this paper, we propose to learn an informative latent representation based on an autoencoder (AE) to naturally capture nonlinear omic features in a lower-dimensional space, which helps identify similarities between patients. Moreover, to take advantage of survival or clinical information, a multi-omics survival analysis approach is embedded when integrating the similarity graphs of heterogeneous data at the multi-omics level. Then, clustering is performed on the integrated similarity to generate subtype groups. In the experiments, the effectiveness of the proposed framework is confirmed on five multi-omics datasets from The Cancer Genome Atlas. The results show that the AE-assisted multi-omics clustering method can identify clinically significant cancer subtypes.
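The pipeline described in the abstract (per-omic AE latent codes → per-view patient similarity → integrated similarity graph → clustering) can be sketched as follows. This is a minimal numpy-only illustration, not the authors' implementation: it substitutes a tied-weight *linear* autoencoder for the paper's nonlinear AE, an RBF kernel for the similarity graph, simple averaging for the multi-omics integration step (the paper's survival-guided integration is omitted), and spectral embedding plus k-means for the final clustering. All function names and the toy cohort are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_linear_ae(X, k, lr=0.005, epochs=400):
    """Tied-weight linear autoencoder (stand-in for the paper's nonlinear AE).
    Minimizes ||X W W^T - X||_F^2 by gradient descent; returns latent Z = X W."""
    n, d = X.shape
    W = rng.normal(scale=0.1, size=(d, k))
    for _ in range(epochs):
        E = X @ W @ W.T - X                      # reconstruction error
        W -= lr * (X.T @ E @ W + E.T @ X @ W) / n
    return X @ W

def rbf_similarity(Z):
    """Patient-patient similarity from latent codes via an RBF kernel."""
    D2 = ((Z[:, None] - Z[None]) ** 2).sum(-1)
    sigma2 = np.median(D2[D2 > 0])               # median heuristic bandwidth
    return np.exp(-D2 / (2 * sigma2))

def kmeans(X, k, iters=100):
    """Plain Lloyd's k-means with random initial centers."""
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        lab = ((X[:, None] - C[None]) ** 2).sum(-1).argmin(1)
        C = np.array([X[lab == j].mean(0) if (lab == j).any() else C[j]
                      for j in range(k)])
    return lab

def subtype(views, n_clusters, latent_dim=3):
    """AE latent codes per omic -> per-view similarity -> averaged graph ->
    normalized spectral embedding -> k-means subtype labels."""
    S = np.mean([rbf_similarity(train_linear_ae(X, latent_dim)) for X in views],
                axis=0)
    d = S.sum(1)
    L = S / np.sqrt(np.outer(d, d))              # D^-1/2 S D^-1/2
    vals, vecs = np.linalg.eigh(L)
    U = vecs[:, -n_clusters:]                    # top eigenvectors
    U /= np.linalg.norm(U, axis=1, keepdims=True)
    return kmeans(U, n_clusters)

# Toy two-omic "cohort": 40 patients in two groups, each view noisy.
labels_true = np.repeat([0, 1], 20)
views = []
for _ in range(2):
    X = rng.normal(size=(40, 30))
    X[:20, :5] += 3.0                            # group signal on a few features
    X = (X - X.mean(0)) / X.std(0)               # standardize each feature
    views.append(X)
labels = subtype(views, n_clusters=2)
```

The averaging step is where the paper instead weights the per-omic similarity graphs using survival/clinical information before clustering; plugging a nonlinear encoder and a survival-guided fusion into `subtype` preserves the same overall structure.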
Pages: 21098-21119
Page count: 22
Related papers
50 items in total
  • [41] Learning missing instances in latent space for incomplete multi-view clustering
    Yu, Zhiqi
    Ye, Mao
    Xiao, Siying
    Tian, Liang
    KNOWLEDGE-BASED SYSTEMS, 2022, 250
  • [42] Dual Contrastive Prediction for Incomplete Multi-View Representation Learning
    Lin, Yijie
    Gou, Yuanbiao
    Liu, Xiaotian
    Bai, Jinfeng
    Lv, Jiancheng
    Peng, Xi
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (04) : 4447 - 4461
  • [43] Multi-view Proximity Learning for Clustering
    Lin, Kun-Yu
    Huang, Ling
    Wang, Chang-Dong
    Chao, Hong-Yang
    DATABASE SYSTEMS FOR ADVANCED APPLICATIONS (DASFAA 2018), PT II, 2018, 10828 : 407 - 423
  • [44] Comprehensive Multi-view Representation Learning
    Zheng, Qinghai
    Zhu, Jihua
    Li, Zhongyu
    Tian, Zhiqiang
    Li, Chen
    INFORMATION FUSION, 2023, 89 : 198 - 209
  • [45] DeepAutoGlioma: a deep learning autoencoder-based multi-omics data integration and classification tools for glioma subtyping
    Munquad, Sana
    Das, Asim Bikas
    BIODATA MINING, 2023, 16 (01)
  • [46] A multi-omics supervised autoencoder for pan-cancer clinical outcome endpoints prediction
    Tan, Kaiwen
    Huang, Weixian
    Hu, Jinlong
    Dong, Shoubin
    BMC MEDICAL INFORMATICS AND DECISION MAKING, 2020, 20 (Suppl 3)
  • [48] Multi-View Clustering With Self-Representation and Structural Constraint
    Gao, Xiaowei
    Ma, Xiaoke
    Zhang, Wensheng
    Huang, Jianbin
    Li, He
    Li, Yanni
    Cui, Jiangtao
    IEEE TRANSACTIONS ON BIG DATA, 2021, 8 (04) : 882 - 893
  • [49] Multi-View Multi-Instance Learning Based on Joint Sparse Representation and Multi-View Dictionary Learning
    Li, Bing
    Yuan, Chunfeng
    Xiong, Weihua
    Hu, Weiming
    Peng, Houwen
    Ding, Xinmiao
    Maybank, Steve
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2017, 39 (12) : 2554 - 2560
  • [50] Consistent graph learning for multi-view spectral clustering
    Xie, Deyan
    Gao, Quanxue
    Zhao, Yougang
    Yang, Fan
    Song, Wei
    PATTERN RECOGNITION, 2024, 154