Cardiovascular disease detection based on deep learning and multi-modal data fusion

Cited by: 2
Authors
Zhu, Jiayuan [1 ]
Liu, Hui [1 ]
Liu, Xiaowei [1 ]
Chen, Chao [1 ]
Shu, Minglei [1 ]
Affiliations
[1] Qilu Univ Technol, Shandong Artificial Intelligence Inst, Shandong Acad Sci, Jinan 250014, Peoples R China
Keywords
Data fusion; ECG; PCG; Deep multi-scale network; SVM-RFECV; Feature selection
DOI
10.1016/j.bspc.2024.106882
Chinese Library Classification (CLC)
R318 [Biomedical Engineering]
Discipline Code
0831
Abstract
Electrocardiogram (ECG) and phonocardiogram (PCG) are widely used for early prevention and diagnosis of cardiovascular diseases (CVDs) because they accurately reflect the state of the heart from different perspectives and can be conveniently collected in a non-invasive manner. However, few studies use both ECG and PCG for CVD detection, and extracting discriminative features without losing useful information is challenging. In this study, we propose a dual-scale deep residual network (DDR-Net) to automatically extract features from raw PCG and ECG signals, respectively. A dual-scale feature aggregation module integrates low-level features at different scales. We employ SVM-RFECV to select important features and use an SVM for the final classification. The proposed method was evaluated on the "training-a" set of the 2016 PhysioNet/CinC Challenge database. The experimental results show that our method outperforms both single-modality methods (ECG or PCG alone) and existing multi-modal studies, yielding an accuracy of 91.6% and an AUC of 0.962. The feature importance of ECG and PCG for CVD detection is also analyzed.
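The SVM-RFECV stage described in the abstract (recursive feature elimination with cross-validation, followed by an SVM classifier) can be sketched with scikit-learn. This is a minimal illustration, not the authors' implementation: the random matrix below merely stands in for the deep features that DDR-Net would extract from ECG/PCG signals, and all dimensions are placeholders.

```python
import numpy as np
from sklearn.feature_selection import RFECV
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32))    # stand-in for fused ECG/PCG deep features
y = rng.integers(0, 2, size=200)  # binary CVD label (normal/abnormal)

# RFECV: recursively eliminate the least important features of a linear
# SVM, keeping the subset size that maximizes cross-validated accuracy.
selector = RFECV(
    estimator=SVC(kernel="linear"),
    step=1,
    cv=StratifiedKFold(n_splits=5),
    scoring="accuracy",
)
selector.fit(X, y)
X_selected = selector.transform(X)

# Final classification on the selected feature subset.
clf = SVC().fit(X_selected, y)
print(X_selected.shape[1], "features retained")
```

On real data, `X` would hold the aggregated dual-scale features; the linear kernel inside RFECV supplies the per-feature weights needed for the elimination ranking.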
Pages: 7