Scaling Representation Learning From Ubiquitous ECG With State-Space Models

Cited by: 0
Authors
Avramidis, Kleanthis [1 ]
Kunc, Dominika [2 ]
Perz, Bartosz [2 ]
Adsul, Kranti [1 ]
Feng, Tiantian [1 ]
Kazienko, Przemyslaw [2 ]
Saganowski, Stanislaw [2 ]
Narayanan, Shrikanth [1 ]
Affiliations
[1] Univ Southern Calif, Viterbi Sch Engn, Los Angeles, CA 90089 USA
[2] Wroclaw Univ Sci & Technol, Dept Artificial Intelligence, PL-50370 Wroclaw, Poland
Keywords
Electrocardiography; Task analysis; Biological system modeling; Data models; Bioinformatics; Training; State-space methods; ubiquitous computing; self-supervised learning; state-space models; artificial intelligence; electrocardiogram; classification; emotion
DOI
10.1109/JBHI.2024.3416897
CLC classification
TP [Automation Technology, Computer Technology]
Subject classification code
0812
Abstract
Ubiquitous sensing from wearable devices in the wild holds promise for enhancing human well-being, from diagnosing clinical conditions and measuring stress to building adaptive health-promoting scaffolds. But the large volumes of data collected across heterogeneous contexts pose challenges for conventional supervised learning approaches. Representation learning from biological signals is an emerging field, catalyzed by recent advances in computational modeling and the abundance of publicly shared databases. The electrocardiogram (ECG) is the most widely researched modality in this context, with applications in health monitoring and stress and affect estimation. Yet most studies are limited by small-scale, controlled data collection and over-parameterized architecture choices. We introduce WildECG, a pre-trained state-space model for representation learning from ECG signals. We train this model in a self-supervised manner on 275,000 ten-second ECG recordings collected in the wild and evaluate it on a range of downstream tasks. The proposed model is a robust backbone for ECG analysis, providing competitive performance on most of the tasks considered while demonstrating efficacy in low-resource regimes.
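To make the pre-training recipe described above concrete, the following is a minimal sketch, assuming a simplified diagonal linear state-space encoder and a masked-reconstruction self-supervised objective. The abstract does not specify WildECG's actual architecture or training objective, so every module name, layer choice, and hyperparameter below (DiagonalSSM, ECGEncoder, mask fraction, window length) is an illustrative assumption, not the authors' implementation.

```python
# Hypothetical sketch (PyTorch): self-supervised pre-training of a
# state-space encoder on 10-second single-lead ECG windows.
import torch
import torch.nn as nn

class DiagonalSSM(nn.Module):
    """Simplified diagonal state-space layer: h_t = a*h_{t-1} + B*x_t, y_t = sum(C*h_t)."""
    def __init__(self, d_model: int, d_state: int = 16):
        super().__init__()
        self.log_a = nn.Parameter(torch.full((d_model, d_state), -0.5))  # decay rates
        self.B = nn.Parameter(torch.randn(d_model, d_state) * 0.1)
        self.C = nn.Parameter(torch.randn(d_model, d_state) * 0.1)

    def forward(self, x):  # x: (batch, time, d_model)
        a = torch.exp(-torch.exp(self.log_a))            # keep decay in (0, 1) for stability
        h = torch.zeros(x.size(0), x.size(2), self.log_a.size(1), device=x.device)
        ys = []
        for t in range(x.size(1)):                       # sequential scan over time
            h = a * h + self.B * x[:, t].unsqueeze(-1)
            ys.append((self.C * h).sum(-1))
        return torch.stack(ys, dim=1) + x                # residual connection

class ECGEncoder(nn.Module):
    def __init__(self, d_model: int = 64, n_layers: int = 4):
        super().__init__()
        self.inp = nn.Linear(1, d_model)                 # raw single-lead ECG samples
        self.layers = nn.ModuleList([DiagonalSSM(d_model) for _ in range(n_layers)])
        self.out = nn.Linear(d_model, 1)

    def forward(self, x):                                # x: (batch, time, 1)
        z = self.inp(x)
        for layer in self.layers:
            z = layer(z)
        return self.out(z), z                            # reconstruction, features

def pretrain_step(model, optimizer, ecg, mask_frac=0.3):
    """One masked-reconstruction step on a batch of ECG windows (batch, time, 1)."""
    mask = (torch.rand(ecg.shape[:2], device=ecg.device) < mask_frac).unsqueeze(-1)
    recon, _ = model(ecg.masked_fill(mask, 0.0))         # hide masked samples from the encoder
    loss = ((recon - ecg) ** 2)[mask.expand_as(ecg)].mean()
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()

if __name__ == "__main__":
    model = ECGEncoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    fake_batch = torch.randn(8, 500, 1)                  # stand-in for 10-s ECG windows
    print(pretrain_step(model, opt, fake_batch))
```

After pre-training, the per-sample features returned by the encoder would be pooled and passed to a small task-specific head, which is one plausible way to realize the downstream evaluation and low-resource fine-tuning the abstract mentions.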
Pages
5877-5889 (13 pages)