Self-Supervised Transformer for Sparse and Irregularly Sampled Multivariate Clinical Time-Series

Cited by: 55
Authors
Tipirneni, Sindhu [1 ]
Reddy, Chandan K. [1 ]
Affiliations
[1] Virginia Tech, 900 N Glebe Rd, Arlington, VA 22203 USA
Funding
National Science Foundation (USA);
Keywords
Time-series; neural networks; deep learning; healthcare; Transformer; self-supervised learning;
DOI
10.1145/3516367
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Multivariate time-series data are frequently observed in critical care settings and are typically characterized by sparsity (missing information) and irregular time intervals. Existing approaches for learning representations in this domain handle these challenges by either aggregation or imputation of values, which in turn suppresses fine-grained information and adds undesirable noise/overhead to the machine learning model. To tackle this problem, we propose a Self-supervised Transformer for Time-Series (STraTS) model, which overcomes these pitfalls by treating time-series as a set of observation triplets instead of using the standard dense matrix representation. It employs a novel Continuous Value Embedding technique to encode continuous time and variable values without the need for discretization. It is composed of a Transformer component with multi-head attention layers, which enable it to learn contextual triplet embeddings while avoiding the problems of recurrence and vanishing gradients that occur in recurrent architectures. In addition, to tackle the problem of limited availability of labeled data (which is typical in many healthcare applications), STraTS utilizes self-supervision by leveraging unlabeled data to learn better representations, using time-series forecasting as an auxiliary proxy task. Experiments on real-world multivariate clinical time-series benchmark datasets demonstrate that STraTS achieves better prediction performance than state-of-the-art methods for mortality prediction, especially when labeled data are limited. Finally, we also present an interpretable version of STraTS, which can identify important measurements in the time-series data. Our data preprocessing and model implementation code is available at https://github.com/sindhura97/STraTS.
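The core representational idea in the abstract — keeping a sparse, irregularly sampled series as observation triplets rather than a dense imputed matrix, and embedding continuous times and values without discretization — can be sketched as follows. This is an illustrative sketch only: the variable names, the toy record, and the untrained one-layer FFN standing in for Continuous Value Embedding are assumptions, not the paper's implementation.

```python
import math

# A sparse clinical record: observations arrive at irregular times and
# most (time, variable) cells are simply absent -- no imputation needed.
raw = {  # time (hours) -> {variable: value}; names are illustrative
    0.5: {"HR": 88.0},
    2.0: {"HR": 92.0, "SBP": 121.0},
    7.25: {"SpO2": 0.97},
}

def to_triplets(record):
    """Flatten irregular observations into sorted (time, variable, value)
    triplets -- the set representation used instead of a dense matrix."""
    return sorted((t, var, val)
                  for t, obs in record.items()
                  for var, val in obs.items())

def cve(x, weights, biases):
    """Toy stand-in for Continuous Value Embedding: a one-layer FFN that
    maps a scalar (a time or a value) to a d-dimensional vector, so no
    binning/discretization of the continuous axis is required.
    The parameters here are fixed placeholders, not trained weights."""
    return [math.tanh(x * w + b) for w, b in zip(weights, biases)]

triplets = to_triplets(raw)
w = [0.1, -0.2, 0.3, 0.05]   # placeholder CVE parameters, d = 4
b = [0.0, 0.1, -0.1, 0.2]
embedded = [(cve(t, w, b), var, cve(v, w, b)) for t, var, v in triplets]
print(len(triplets))  # 4 observed values, regardless of grid size
```

A dense hourly grid for the same record would hold dozens of mostly-missing cells; the triplet set grows only with the number of actual observations, which is what lets the downstream attention layers consume the raw measurements directly.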
Pages: 17