SAMA: Spatially-Aware Multimodal Network with Attention for Early Lung Cancer Diagnosis

Cited by: 1
Authors
Roa, Mafe [1 ,2 ]
Daza, Laura [1 ,2 ]
Escobar, Maria [1 ,2 ]
Castillo, Angela [1 ,2 ]
Arbelaez, Pablo [1 ,2 ]
Affiliations
[1] Univ Los Andes, Ctr Res & Format Artificial Intelligence, Bogota, Colombia
[2] Univ Los Andes, Dept Biomed Engn, Bogota 111711, Colombia
Source
MULTIMODAL LEARNING FOR CLINICAL DECISION SUPPORT, 2021, Vol. 13050
Keywords
Multimodal data; Lung cancer; Dynamic filters; Computer aided diagnosis; CT
DOI
10.1007/978-3-030-89847-2_5
CLC Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Lung cancer is the deadliest cancer worldwide. This fact has driven the development of medical and computational methods to improve early diagnosis, aiming to reduce its fatality rate. Radiologists conduct lung cancer screening and diagnosis by localizing and characterizing pathologies. Therefore, there is an inherent relationship between visual clinical findings and spatial location in the images. However, previous work has not exploited this spatial relationship between multimodal data. In this work, we propose a Spatially-Aware Multimodal Network with Attention (SAMA) for early lung cancer diagnosis. Our approach takes advantage of the spatial relationship between visual and clinical information, emulating the diagnostic process of the specialist. Specifically, we propose a multimodal fusion module composed of dynamic filtering of visual features with clinical data, followed by a channel attention mechanism. We provide empirical evidence of the potential of SAMA to spatially integrate visual and clinical information. Our method outperforms the state-of-the-art method by 14.3% on the LUng CAncer Screening with Multimodal Biomarkers dataset.
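As a concrete illustration of the fusion module described in the abstract (dynamic filtering of visual features conditioned on clinical data, followed by channel attention), the sketch below shows one plausible PyTorch implementation. This is a minimal sketch, not the authors' released code: the module name `DynamicFilterFusion`, the depthwise dynamic 3D kernels, the squeeze-and-excitation-style channel attention, and all tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DynamicFilterFusion(nn.Module):
    """Hypothetical fusion block: clinical data predicts per-sample depthwise
    convolution kernels that filter the visual features, and a
    squeeze-and-excitation-style channel attention reweights the result.
    Names, kernel size, and shapes are assumptions, not the paper's code."""

    def __init__(self, vis_channels: int, clin_dim: int,
                 kernel_size: int = 3, reduction: int = 4):
        super().__init__()
        self.c, self.k = vis_channels, kernel_size
        # Hypernetwork: clinical vector -> one depthwise 3D kernel per channel.
        self.filter_gen = nn.Linear(clin_dim, vis_channels * kernel_size ** 3)
        # Channel attention (squeeze-and-excitation style).
        self.attn = nn.Sequential(
            nn.Linear(vis_channels, vis_channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(vis_channels // reduction, vis_channels),
            nn.Sigmoid(),
        )

    def forward(self, vis: torch.Tensor, clin: torch.Tensor) -> torch.Tensor:
        # vis: (B, C, D, H, W) CT feature map; clin: (B, clin_dim) clinical vector.
        b, c, d, h, w = vis.shape
        # Dynamic filtering: per-sample depthwise kernels from clinical data.
        kernels = self.filter_gen(clin).view(b * c, 1, self.k, self.k, self.k)
        # Grouped-conv trick: fold the batch into channels so each sample is
        # filtered with its own clinically-conditioned kernels.
        fused = F.conv3d(vis.reshape(1, b * c, d, h, w), kernels,
                         padding=self.k // 2, groups=b * c)
        fused = fused.view(b, c, d, h, w)
        # Channel attention over the fused features (global average pool -> MLP).
        weights = self.attn(fused.mean(dim=(2, 3, 4))).view(b, c, 1, 1, 1)
        return fused * weights


# Toy usage: batch of 2 feature maps, 64 channels, 8 clinical variables.
fusion = DynamicFilterFusion(vis_channels=64, clin_dim=8)
out = fusion(torch.randn(2, 64, 4, 16, 16), torch.randn(2, 8))
print(out.shape)  # torch.Size([2, 64, 4, 16, 16])
```

The grouped-convolution reshaping is a common way to apply per-sample dynamic filters in one call; the key property, matching the abstract's description, is that the clinical modality modulates the visual features at every spatial location before the channel attention acts.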
Pages: 48 - 58
Page count: 11
Related Papers
50 records in total
  • [1] Yamani, Asma Z.; Katterbaeur, Klemens; Alshehri, Abdallah A.; Al-Zaidy, Rabeah A. SAMA: Spatially-Aware Model-Agnostic Machine Learning Framework for Geophysical Data. IEEE ACCESS, 2023, 11: 7436-7449.
  • [2] Jiang, Liang; Zhang, Cheng; Zhang, Huan; Cao, Hui. A lightweight spatially-aware classification model for breast cancer pathology images. BIOCYBERNETICS AND BIOMEDICAL ENGINEERING, 2024, 44(03): 586-608.
  • [3] Zhuang, Yilin; Cheng, Sibo; Duraisamy, Karthik. Spatially-aware diffusion models with cross-attention for global field reconstruction with sparse observations. COMPUTER METHODS IN APPLIED MECHANICS AND ENGINEERING, 2025, 435.
  • [4] He, Xiaoyu; Wang, Yong; Zhao, Shuang; Chen, Xiang. Co-Attention Fusion Network for Multimodal Skin Cancer Diagnosis. PATTERN RECOGNITION, 2023, 133.
  • [5] Li, Yi; Yang, BaoYao; Pan, Dan; Zeng, An; Wu, Long; Yang, Yang. Early Diagnosis of Alzheimer's Disease Based on Multimodal Hypergraph Attention Network. 2023 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO, ICME, 2023: 192-197.
  • [6] Zhu, Qi; Wang, Heyang; Xu, Bingliang; Zhang, Zhiqiang; Shao, Wei; Zhang, Daoqiang. Multimodal Triplet Attention Network for Brain Disease Diagnosis. IEEE TRANSACTIONS ON MEDICAL IMAGING, 2022, 41(12): 3884-3894.
  • [7] Jing, Peiguang; Cui, Kai; Guan, Weili; Nie, Liqiang; Su, Yuting. Category-Aware Multimodal Attention Network for Fashion Compatibility Modeling. IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25: 9120-9131.
  • [8] Yang, Hongyu; Zhang, Jinjiao; Hu, Ze; Zhang, Liang; Cheng, Xiang. Multimodal Relationship-aware Attention Network for Fake News Detection. 2023 INTERNATIONAL CONFERENCE ON DATA SECURITY AND PRIVACY PROTECTION, DSPP, 2023: 143-149.
  • [9] Zheng, Yi; Zhang, Yuejie; Feng, Rui; Zhang, Tao; Fan, Weiguo. Stacked Multimodal Attention Network for Context-Aware Video Captioning. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2022, 32(01): 31-42.
  • [10] Yang, Hongyu; Zhang, Jinjiao; Zhang, Liang; Cheng, Xiang; Hu, Ze. MRAN: Multimodal relationship-aware attention network for fake news detection. COMPUTER STANDARDS & INTERFACES, 2024, 89.