Multi-modal fusion network with intra- and inter-modality attention for prognosis prediction in breast cancer

Cited: 6
Authors
Liu, Honglei [1 ]
Shi, Yi [1 ]
Li, Ao [1 ]
Wang, Minghui [1 ]
Affiliations
[1] Univ Sci & Technol China, Sch Informat Sci & Technol, Hefei 230027, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Deep learning; Multi-modal fusion; Breast cancer; Prognosis prediction; Attention mechanism; INTEGRATIVE ANALYSIS; GENOMIC DATA; FEATURES; IMAGES;
DOI
10.1016/j.compbiomed.2023.107796
Chinese Library Classification (CLC)
Q [Biological Sciences];
Subject classification codes
07; 0710; 09;
Abstract
Accurate breast cancer prognosis prediction can help clinicians develop appropriate treatment plans and improve patients' quality of life. Recent prognosis prediction studies suggest that fusing multi-modal data, e.g., genomic data and pathological images, plays a crucial role in improving predictive performance. Despite the promising results of existing approaches, effective multi-modal fusion still faces challenges. First, although the Kronecker product is a powerful fusion technique, it produces a high-dimensional quadratic expansion of features that can incur high computational cost and overfitting risk, limiting its performance and applicability in cancer prognosis prediction. Second, most existing methods focus on learning cross-modality relations between different modalities while ignoring modality-specific relations, which are complementary to cross-modality relations and beneficial for cancer prognosis prediction. To address these challenges, in this study we propose a novel attention-based multi-modal network that accurately predicts breast cancer prognosis by efficiently modeling both modality-specific and cross-modality relations without introducing high-dimensional features. Specifically, two intra-modality self-attention modules and an inter-modality cross-attention module, accompanied by a latent-space transformation of the channel affinity matrix, are developed to capture modality-specific and cross-modality relations, respectively, for efficient integration of genomic data and pathological images. Moreover, we design an adaptive fusion block to take full advantage of both modality-specific and cross-modality relations. Comprehensive experiments demonstrate that our method effectively boosts breast cancer prognosis prediction performance and compares favorably with state-of-the-art methods.
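The architecture described in the abstract lends itself to a compact illustration. Below is a minimal sketch, not the authors' implementation, of the general idea of intra-modality self-attention, inter-modality cross-attention, and gated adaptive fusion; the feature dimensions, token counts, module names, and the use of standard multi-head attention (rather than the paper's channel-affinity-matrix formulation) are illustrative assumptions.

import torch
import torch.nn as nn

class IntraModalityAttention(nn.Module):
    # Self-attention within a single modality (captures modality-specific relations).
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):                     # x: (batch, tokens, dim)
        out, _ = self.attn(x, x, x)           # queries, keys, values from the same modality
        return self.norm(x + out)             # residual connection

class InterModalityAttention(nn.Module):
    # Cross-attention in which one modality queries the other (cross-modality relations).
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, q, kv):                 # q attends to kv
        out, _ = self.attn(q, kv, kv)
        return self.norm(q + out)

class AdaptiveFusion(nn.Module):
    # Gated ("adaptive") combination of modality-specific and cross-modality features.
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, specific, cross):       # both: (batch, dim)
        g = self.gate(torch.cat([specific, cross], dim=-1))
        return g * specific + (1 - g) * cross

# Toy usage with assumed shapes: 100 genomic feature tokens and 64 image-patch tokens,
# both already embedded into a shared 128-dimensional space.
dim = 128
genomic = torch.randn(2, 100, dim)            # genomic features for a batch of 2 patients
image = torch.randn(2, 64, dim)               # pathological image features

intra = IntraModalityAttention(dim)
inter = InterModalityAttention(dim)
fusion = AdaptiveFusion(dim)

specific = intra(genomic).mean(dim=1)         # pooled modality-specific representation
cross = inter(genomic, image).mean(dim=1)     # genomic features attending to image features
fused = fusion(specific, cross)               # adaptively fused representation, shape (2, 128)
risk = nn.Linear(dim, 1)(fused)               # e.g., a prognosis risk score per patient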
Pages: 11
Related papers
50 records in total
  • [1] Instance-Guided Multi-modal Fake News Detection with Dynamic Intra- and Inter-modality Fusion
    Wang, Jie
    Yang, Yan
    Liu, Keyu
    Xie, Peng
    Liu, Xiaorong
    ADVANCES IN KNOWLEDGE DISCOVERY AND DATA MINING, PAKDD 2022, PT I, 2022, 13280 : 510 - 521
  • [2] Modeling Intra and Inter-modality Incongruity for Multi-Modal Sarcasm Detection
    Pan, Hongliang
    Lin, Zheng
    Fu, Peng
    Qi, Yatao
    Wang, Weiping
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EMNLP 2020, 2020, : 1383 - 1392
  • [3] Dynamic Fusion with Intra- and Inter-modality Attention Flow for Visual Question Answering
    Gao, Peng
    Jiang, Zhengkai
    You, Haoxuan
    Lu, Pan
    Hoi, Steven
    Wang, Xiaogang
    Li, Hongsheng
    2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 6632 - 6641
  • [4] Emotion recognition from multiple physiological signals using intra- and inter-modality attention fusion network
    Gong, Linlin
    Chen, Wanzhong
    Li, Mingyang
    Zhang, Tao
    DIGITAL SIGNAL PROCESSING, 2024, 144
  • [5] Cross-Modal Image-Recipe Retrieval via Intra- and Inter-Modality Hybrid Fusion
    Li, Jiao
    Sun, Jialiang
    Xu, Xing
    Yu, Wei
    Shen, Fumin
    PROCEEDINGS OF THE 2021 INTERNATIONAL CONFERENCE ON MULTIMEDIA RETRIEVAL (ICMR '21), 2021, : 173 - 182
  • [6] IMIIN: An inter-modality information interaction network for 3D multi-modal breast tumor segmentation
    Peng, Chengtao
    Zhang, Yue
    Zheng, Jian
    Li, Bin
    Shen, Jun
    Li, Ming
    Liu, Lei
    Qiu, Bensheng
    Chen, Danny Z.
    COMPUTERIZED MEDICAL IMAGING AND GRAPHICS, 2022, 95
  • [7] Fusion of Intra- and Inter-modality Algorithms for Face-Sketch Recognition
    Galea, Christian
    Farrugia, Reuben A.
    COMPUTER ANALYSIS OF IMAGES AND PATTERNS, CAIP 2015, PT II, 2015, 9257 : 700 - 711
  • [8] Supervised Intra- and Inter-Modality Similarity Preserving Hashing for Cross-Modal Retrieval
    Chen, Zhikui
    Zhong, Fangming
    Min, Geyong
    Leng, Yonglin
    Ying, Yiming
    IEEE ACCESS, 2018, 6 : 27796 - 27808
  • [9] Deep multi-modal fusion network with gated unit for breast cancer survival prediction
    Yuan, Han
    Xu, Hongzhen
    COMPUTER METHODS IN BIOMECHANICS AND BIOMEDICAL ENGINEERING, 2024, 27 (07) : 883 - 896
  • [10] INTER-MODALITY FUSION BASED ATTENTION FOR ZERO-SHOT CROSS-MODAL RETRIEVAL
    Chakraborty, Bela
    Wang, Peng
    Wang, Lei
    2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2021, : 2648 - 2652