Self-Supervised Transformer for Sparse and Irregularly Sampled Multivariate Clinical Time-Series

Cited by: 55
Authors
Tipirneni, Sindhu [1]
Reddy, Chandan K. [1]
Affiliations
[1] Virginia Tech, 900 N Glebe Rd, Arlington, VA 22203 USA
Funding
U.S. National Science Foundation;
Keywords
Time-series; neural networks; deep learning; healthcare; Transformer; self-supervised learning;
DOI
10.1145/3516367
CLC Classification Number
TP [Automation technology; computer technology];
Discipline Code
0812;
Abstract
Multivariate time-series data are frequently observed in critical care settings and are typically characterized by sparsity (missing information) and irregular time intervals. Existing approaches for learning representations in this domain handle these challenges by either aggregation or imputation of values, which in turn suppresses the fine-grained information and adds undesirable noise/overhead into the machine learning model. To tackle this problem, we propose a Self-supervised Transformer for Time-Series (STraTS) model, which overcomes these pitfalls by treating time-series as a set of observation triplets instead of using the standard dense matrix representation. It employs a novel Continuous Value Embedding technique to encode continuous time and variable values without the need for discretization. It is composed of a Transformer component with multi-head attention layers, which enable it to learn contextual triplet embeddings while avoiding the problems of recurrence and vanishing gradients that occur in recurrent architectures. In addition, to tackle the problem of limited availability of labeled data (which is typically observed in many healthcare applications), STraTS utilizes self-supervision by leveraging unlabeled data to learn better representations by using time-series forecasting as an auxiliary proxy task. Experiments on real-world multivariate clinical time-series benchmark datasets demonstrate that STraTS has better prediction performance than state-of-the-art methods for mortality prediction, especially when labeled data is limited. Finally, we also present an interpretable version of STraTS, which can identify important measurements in the time-series data. Our data preprocessing and model implementation codes are available at https://github.com/sindhura97/STraTS.
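The abstract outlines the core ideas: each observation is kept as a (time, variable, value) triplet, and the continuous time and value components are embedded with a Continuous Value Embedding (CVE) module rather than being discretized or imputed onto a fixed grid. The following is a minimal illustrative sketch of that representation in PyTorch; the module names, layer sizes, and the small tanh feed-forward form of the CVE are assumptions for illustration, not the authors' implementation (see the linked GitHub repository for the official code).

# Illustrative sketch only: triplet representation + CVE-style embedding.
import torch
import torch.nn as nn

class CVE(nn.Module):
    """Embed a continuous scalar (time or value) without discretization."""
    def __init__(self, d_model: int, d_hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, d_hidden),
            nn.Tanh(),
            nn.Linear(d_hidden, d_model),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_obs) tensor of continuous times or values
        return self.net(x.unsqueeze(-1))          # -> (batch, n_obs, d_model)

class TripletEmbedding(nn.Module):
    """Sum of time, variable, and value embeddings for each observation triplet."""
    def __init__(self, n_variables: int, d_model: int):
        super().__init__()
        self.time_emb = CVE(d_model)
        self.value_emb = CVE(d_model)
        self.var_emb = nn.Embedding(n_variables, d_model)

    def forward(self, times, var_ids, values):
        return self.time_emb(times) + self.var_emb(var_ids) + self.value_emb(values)

# Three observed triplets for one patient; only existing observations are encoded,
# with no imputation and no fixed time grid (variable ids and values are made up).
times   = torch.tensor([[0.5, 1.2, 4.0]])     # hours since admission
var_ids = torch.tensor([[0, 2, 0]])           # e.g. 0 = heart rate, 2 = glucose
values  = torch.tensor([[80.0, 5.4, 92.0]])   # typically normalized per variable

emb = TripletEmbedding(n_variables=10, d_model=64)(times, var_ids, values)
print(emb.shape)  # (1, 3, 64): triplet embeddings ready for a Transformer encoder

In a full model, these triplet embeddings would be fed to a multi-head-attention encoder (e.g. nn.TransformerEncoder) to produce contextual embeddings, with a forecasting head used as the self-supervised proxy task during pretraining.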
Pages: 17
Related Papers
50 items in total
[41] Koupai, Armand K.; Bocus, Mohammud J.; Santos-Rodriguez, Raul; Piechocki, Robert J.; McConville, Ryan. Self-supervised multimodal fusion transformer for passive activity recognition. IET WIRELESS SENSOR SYSTEMS, 2022, 12(5-6): 149-160.
[42] Madan, Neelu; Ristea, Nicolae-Catalin; Ionescu, Radu Tudor; Nasrollahi, Kamal; Khan, Fahad Shahbaz; Moeslund, Thomas B.; Shah, Mubarak. Self-Supervised Masked Convolutional Transformer Block for Anomaly Detection. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2024, 46(01): 525-542.
[43] Xiao, Qian; Pitt, Breanne; Johnston, Keith; Wade, Vincent. Multi-dimensional Learner Profiling by Modeling Irregular Multivariate Time Series with Self-supervised Deep Learning. ARTIFICIAL INTELLIGENCE IN EDUCATION, AIED 2023, 2023, 13916: 674-680.
[44] Yang, Guangju; Luo, Tian-jian; Zhang, Xiaochen. Self-supervised deep contrastive and auto-regressive domain adaptation for time-series based on channel recalibration. ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2025, 145.
[45] Xia, Lianghao; Huang, Chao; Zhang, Chuxu. Self-Supervised Hypergraph Transformer for Recommender Systems. PROCEEDINGS OF THE 28TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2022, 2022: 2100-2109.
[46] Xu, Yu-Hao; Wang, Zhen-Hai; Wang, Zhi-Ru; Fan, Rong; Wang, Xing. A Recommendation Algorithm Based on a Self-supervised Learning Pretrain Transformer. Neural Processing Letters, 2023, 55: 4481-4497.
[47] Xu, Yu-Hao; Wang, Zhen-Hai; Wang, Zhi-Ru; Fan, Rong; Wang, Xing. A Recommendation Algorithm Based on a Self-supervised Learning Pretrain Transformer. NEURAL PROCESSING LETTERS, 2023, 55(04): 4481-4497.
[48] Khormali, Aminollah; Yuan, Jiann-Shiun. Self-Supervised Graph Transformer for Deepfake Detection. IEEE ACCESS, 2024, 12: 58114-58127.
[49] Dumeur, Iris; Valero, Silvia; Inglada, Jordi. SELF-SUPERVISED SPATIO-TEMPORAL REPRESENTATION LEARNING OF SATELLITE IMAGE TIME SERIES. IGARSS 2023 - 2023 IEEE INTERNATIONAL GEOSCIENCE AND REMOTE SENSING SYMPOSIUM, 2023: 642-645.
[50] Avolio, Corrado; Tricomi, Alessia; Mammone, Claudio; Zavagli, Massimo; Costantini, Mario. A DEEP LEARNING ARCHITECTURE FOR HETEROGENEOUS AND IRREGULARLY SAMPLED REMOTE SENSING TIME SERIES. 2019 IEEE INTERNATIONAL GEOSCIENCE AND REMOTE SENSING SYMPOSIUM (IGARSS 2019), 2019: 9807-9810.