Applying masked autoencoder-based self-supervised learning for high-capability vision transformers of electrocardiographies

Cited: 0
Authors
Sawano, Shinnosuke [1 ]
Kodera, Satoshi [1 ]
Setoguchi, Naoto [2 ]
Tanabe, Kengo [2 ]
Kushida, Shunichi [3 ]
Kanda, Junji [3 ]
Saji, Mike [4 ]
Nanasato, Mamoru [4 ]
Maki, Hisataka [5 ]
Fujita, Hideo [5 ]
Kato, Nahoko [6 ]
Watanabe, Hiroyuki [6 ]
Suzuki, Minami [7 ]
Takahashi, Masao [7 ]
Sawada, Naoko [8 ]
Yamasaki, Masao [8 ]
Sato, Masataka [1 ]
Katsushika, Susumu [1 ]
Shinohara, Hiroki [1 ]
Takeda, Norifumi [1 ]
Fujiu, Katsuhito [1 ,9 ]
Daimon, Masao [1 ,10 ]
Akazawa, Hiroshi [1 ]
Morita, Hiroyuki [1 ]
Komuro, Issei [1 ]
Affiliations
[1] Univ Tokyo Hosp, Dept Cardiovasc Med, Tokyo, Japan
[2] Mitsui Mem Hosp, Div Cardiol, Tokyo, Japan
[3] Asahi Gen Hosp, Dept Cardiovasc Med, Chiba, Japan
[4] Sakakibara Heart Inst, Dept Cardiol, Tokyo, Japan
[5] Jichi Med Univ, Saitama Med Ctr, Div Cardiovasc Med, Omiya, Japan
[6] Tokyo Bay Med Ctr, Dept Cardiol, Urayasu, Japan
[7] JR Gen Hosp, Dept Cardiol, Tokyo, Japan
[8] NTT Med Ctr Tokyo, Dept Cardiol, Tokyo, Japan
[9] Univ Tokyo, Dept Adv Cardiol, Tokyo, Japan
[10] Univ Tokyo Hosp, Dept Clin Lab, Tokyo, Japan
Source
PLOS ONE | 2024, Vol. 19, Issue 8
Keywords
VENTRICULAR SYSTOLIC DYSFUNCTION; ECG;
DOI
10.1371/journal.pone.0307978
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Subject Classification Code
07; 0710; 09;
Abstract
The generalization of deep neural network algorithms to a broader population is an important challenge in the medical field. We aimed to apply self-supervised learning using masked autoencoders (MAEs) to improve the performance of a 12-lead electrocardiography (ECG) analysis model trained on limited ECG data. We pretrained Vision Transformer (ViT) models by reconstructing masked ECG data with an MAE. We fine-tuned this MAE-based ECG pretrained model on ECG-echocardiography data from The University of Tokyo Hospital (UTokyo) for the detection of left ventricular systolic dysfunction (LVSD), and then evaluated it on multi-center external validation data from seven institutions, using the area under the receiver operating characteristic curve (AUROC) for assessment. We included 38,245 ECG-echocardiography pairs from UTokyo and 229,439 pairs from all institutions. The performance of MAE-based ECG models pretrained on ECG data from UTokyo was significantly higher than that of other deep neural network models across all external validation cohorts (AUROC, 0.913-0.962 for LVSD, p < 0.001). Moreover, the performance of the MAE-based ECG analysis model improved with increasing model capacity and amount of training data. Additionally, the MAE-based ECG analysis model maintained high performance even on the ECG benchmark dataset (PTB-XL). Our proposed method developed high-performance MAE-based ECG analysis models using limited ECG data.
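As a rough illustration of the pretraining scheme summarized in the abstract, the sketch below shows MAE-style masked reconstruction of 12-lead ECG signals with a small Transformer encoder in PyTorch. The signal length, patch size, model width, depth, and 75% masking ratio are illustrative assumptions rather than values reported in the paper, and the class name ECGMae is hypothetical.

# Minimal sketch of MAE-style self-supervised pretraining on 12-lead ECG.
# Assumes ECGs arrive as (batch, leads=12, length=5000) tensors; all
# hyperparameters below are placeholders, not the paper's settings.
import torch
import torch.nn as nn

class ECGMae(nn.Module):
    def __init__(self, leads=12, sig_len=5000, patch_len=50,
                 dim=256, depth=4, heads=8, mask_ratio=0.75):
        super().__init__()
        self.num_patches = sig_len // patch_len
        self.patch_dim = leads * patch_len
        self.mask_ratio = mask_ratio
        self.to_tokens = nn.Linear(self.patch_dim, dim)          # patch embedding
        self.pos_emb = nn.Parameter(torch.zeros(1, self.num_patches, dim))
        enc_layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, depth)   # ViT-style encoder
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        dec_layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.decoder = nn.TransformerEncoder(dec_layer, 2)       # lightweight decoder
        self.to_signal = nn.Linear(dim, self.patch_dim)          # reconstruct patches

    def patchify(self, x):
        # (B, 12, L) -> (B, num_patches, 12 * patch_len)
        b, c, l = x.shape
        x = x.reshape(b, c, self.num_patches, -1)
        return x.permute(0, 2, 1, 3).reshape(b, self.num_patches, -1)

    def forward(self, x):
        patches = self.patchify(x)
        tokens = self.to_tokens(patches) + self.pos_emb
        b, n, d = tokens.shape
        keep = int(n * (1 - self.mask_ratio))
        perm = torch.rand(b, n, device=x.device).argsort(dim=1)  # random mask per sample
        keep_idx = perm[:, :keep]
        visible = torch.gather(tokens, 1, keep_idx.unsqueeze(-1).expand(-1, -1, d))
        encoded = self.encoder(visible)                           # encode visible patches only
        # scatter encoded tokens back to their positions; masked slots get a learned token
        full = self.mask_token.expand(b, n, d).clone()
        full.scatter_(1, keep_idx.unsqueeze(-1).expand(-1, -1, d), encoded)
        recon = self.to_signal(self.decoder(full + self.pos_emb))
        mask = torch.ones(b, n, device=x.device)
        mask.scatter_(1, keep_idx, 0.0)                           # 1 = masked patch
        loss = ((recon - patches) ** 2).mean(dim=-1)
        return (loss * mask).sum() / mask.sum()                   # loss on masked patches only

# Usage: one pretraining step on a random batch standing in for real ECG data.
model = ECGMae()
ecg = torch.randn(8, 12, 5000)
loss = model(ecg)
loss.backward()

After pretraining along these lines, the encoder would be reused with a classification head and fine-tuned on labeled ECG-echocardiography pairs for LVSD detection; that fine-tuning step is not shown here.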
Pages: 17