Global and cross-modal feature aggregation for multi-omics data classification and on

Cited by: 10
Authors
Zheng, Xiao [1 ]
Wang, Minhui [2 ]
Huang, Kai [3 ]
Zhu, En [1 ]
Affiliations
[1] Natl Univ Def Technol, Sch Comp, Changsha 410073, Peoples R China
[2] Nanjing Med Univ, Kangda Coll, Lianshui Peoples Hosp, Dept Pharm, Huaian 223300, Peoples R China
[3] Huazhong Univ Sci & Technol, Union Hosp, Tongji Med Coll, Clin Ctr Human Gene Res, Wuhan 430030, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Multi-omics data classification; Multi-modal learning; Cross-modal fusion; Contrastive learning; NETWORK; FUSION; GRAPH; MULTIMODALITY; PREDICTION; BIOLOGY;
DOI
10.1016/j.inffus.2023.102077
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
With the rapid development of single-cell multi-modal sequencing technologies, increasing amounts of multi-omics data are becoming available, providing a unique opportunity to identify distinct cell types at the single-cell level. It is therefore important to integrate different modalities with high-dimensional features to boost the final multi-omics data classification performance. However, existing multi-omics data classification methods mainly focus on exploiting the complementary information of different modalities while ignoring the learning confidence and cross-modal sample relationships during information fusion. In this paper, we propose a multi-omics data classification network via global and cross-modal feature aggregation, referred to as GCFANet. On one hand, considering that many feature dimensions in different modalities do not contribute to the final classification performance but instead disturb the discriminability of different samples, we propose a feature confidence learning mechanism to suppress redundant features and enhance the expression of discriminative feature dimensions in each modality. On the other hand, to capture the inherent sample structure information implied in each modality, we design a graph convolutional network branch to learn the corresponding structure-preserved feature representation. The modality-specific feature representations are then concatenated and fed into a transformer-induced global and cross-modal feature aggregation module to learn a consensus feature representation across modalities. In addition, the consensus feature representation used for final classification is enhanced via a view-specific consistency-preserved contrastive learning strategy. Extensive experiments on four multi-omics datasets demonstrate the efficacy of the proposed GCFANet.
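The per-modality stages the abstract describes (a feature confidence gate that suppresses redundant dimensions, a GCN branch that preserves sample structure, then concatenation of modality-specific representations for cross-modal fusion) can be sketched in a deliberately simplified NumPy form. This is not the paper's implementation: the toy dimensions, the gate weights `w`, and the GCN weights `W` are invented stand-ins for parameters GCFANet would learn, and the transformer fusion module and contrastive loss are omitted entirely.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def confidence_gate(X, w):
    # Element-wise gate in (0, 1) per feature dimension:
    # small gate values suppress redundant dimensions,
    # large ones keep discriminative dimensions.
    return X * sigmoid(w)  # w: (d,), broadcast over samples

def gcn_layer(A, X, W):
    # One graph-convolution step with symmetric normalization:
    # H = ReLU( D^{-1/2} (A + I) D^{-1/2} X W )
    A_hat = A + np.eye(A.shape[0])           # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ X @ W, 0.0)

# Two toy modalities over the same 6 samples, with different feature sizes.
n = 6
X1 = rng.standard_normal((n, 8))
X2 = rng.standard_normal((n, 5))

# A symmetric sample-affinity graph (here random, for illustration only).
A = (rng.random((n, n)) > 0.5).astype(float)
A = np.triu(A, 1)
A = A + A.T

# Gate each modality, then learn a structure-preserved representation.
h = 4
Z1 = gcn_layer(A, confidence_gate(X1, rng.standard_normal(8)),
               rng.standard_normal((8, h)))
Z2 = gcn_layer(A, confidence_gate(X2, rng.standard_normal(5)),
               rng.standard_normal((5, h)))

# Concatenate modality-specific representations; in GCFANet this would
# feed the transformer-induced global and cross-modal aggregation module.
Z = np.concatenate([Z1, Z2], axis=1)
print(Z.shape)  # one fused (n, 2*h) representation per sample
```

The symmetric normalization keeps node representations on a comparable scale regardless of degree, which is why it is the standard choice for GCN propagation.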
Pages: 9
Related papers
50 records in total
  • [41] Multispectral Scene Classification via Cross-Modal Knowledge Distillation
    Liu, Hao
    Qu, Ying
    Zhang, Liqiang
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2022, 60
  • [42] Attention-guided cross-modal multiple feature aggregation network for RGB-D salient object detection
    Chen, Bojian
    Wu, Wenbin
    Li, Zhezhou
    Han, Tengfei
    Chen, Zhuolei
    Zhang, Weihao
    ELECTRONIC RESEARCH ARCHIVE, 2024, 32 (01): : 643 - 669
  • [43] Alzheimer's disease prediction based on continuous feature representation using multi-omics data integration
    Abbas, Zeeshan
    Tayara, Hilal
    Chong, Kil To
    CHEMOMETRICS AND INTELLIGENT LABORATORY SYSTEMS, 2022, 223
  • [44] A Short Video Classification Framework Based on Cross-Modal Fusion
    Pang, Nuo
    Guo, Songlin
    Yan, Ming
    Chan, Chien Aun
    SENSORS, 2023, 23 (20)
  • [45] Cross-modal incongruity aligning and collaborating for multi-modal sarcasm detection
    Wang, Jie
    Yang, Yan
    Jiang, Yongquan
    Ma, Minbo
    Xie, Zhuyang
    Li, Tianrui
    INFORMATION FUSION, 2024, 103
  • [46] Knowledge-guided learning methods for integrative analysis of multi-omics data
    Li, Wenrui
    Ballard, Jenna
    Zhao, Yize
    Long, Qi
    COMPUTATIONAL AND STRUCTURAL BIOTECHNOLOGY JOURNAL, 2024, 23 : 1945 - 1950
  • [47] A Cross-Modal Feature Fusion Model Based on ConvNeXt for RGB-D Semantic Segmentation
    Tang, Xiaojiang
    Li, Baoxia
    Guo, Junwei
    Chen, Wenzhuo
    Zhang, Dan
    Huang, Feng
    MATHEMATICS, 2023, 11 (08)
  • [48] How to interpret and integrate multi-omics data at systems level
    Jung, Gun Tae
    Kim, Kwang-Pyo
    Kim, Kwoneel
    ANIMAL CELLS AND SYSTEMS, 2020, 24 (01) : 1 - 7
  • [49] Integration strategies of multi-omics data for machine learning analysis
    Picard, M.
    Scott-Boyer, M. -P.
    Bodein, A.
    Périn, O.
    Droit, A.
    COMPUTATIONAL AND STRUCTURAL BIOTECHNOLOGY JOURNAL, 2021, 19 : 3735 - 3746
  • [50] An integrated multi-omics analysis reveals osteokines involved in global regulation
    Liang, Wenquan
    Wei, Tiantian
    Hu, Le
    Chen, Meijun
    Tong, Liping
    Zhou, Wu
    Duan, Xingwei
    Zhao, Xiaoyang
    Zhou, Weijie
    Jiang, Qing
    Xiao, Guozhi
    Zou, Weiguo
    Chen, Di
    Zou, Zhipeng
    Bai, Xiaochun
    CELL METABOLISM, 2024, 36 (05) : 1144 - 1163.e7