CroSSL: Cross-modal Self-Supervised Learning for Time-series through Latent Masking

Cited by: 6
Authors
Deldari, Shohreh [1,2]
Spathis, Dimitris [2 ]
Malekzadeh, Mohammad [2 ]
Kawsar, Fahim [2 ]
Salim, Flora D. [1 ]
Mathur, Akhil [2 ]
Affiliations
[1] Univ New South Wales, Sydney, NSW, Australia
[2] Nokia Bell Labs, Cambridge, England
Source
PROCEEDINGS OF THE 17TH ACM INTERNATIONAL CONFERENCE ON WEB SEARCH AND DATA MINING, WSDM 2024 | 2024
Keywords
Self-supervised learning; Representation learning; Cross-modal
DOI
10.1145/3616855.3635795
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Limited availability of labeled data for machine learning on multi-modal time-series extensively hampers progress in the field. Self-supervised learning (SSL) is a promising approach to learning data representations without relying on labels. However, existing SSL methods require expensive computations of negative pairs and are typically designed for single modalities, which limits their versatility. We introduce CroSSL (Cross-modal SSL), which puts forward two novel concepts: masking intermediate embeddings produced by modality-specific encoders, and their aggregation into a global embedding through a cross-modal aggregator that can be fed to downstream classifiers. CroSSL allows for handling missing modalities and end-to-end cross-modal learning without requiring prior data preprocessing for handling missing inputs or negative-pair sampling for contrastive learning. We evaluate our method on a wide range of data, including motion sensors such as accelerometers or gyroscopes and biosignals (heart rate, electroencephalograms, electromyograms, electrooculograms, and electrodermal activity) to investigate the impact of masking ratios and masking strategies for various data types and the robustness of the learned representations to missing data. Overall, CroSSL outperforms previous SSL and supervised benchmarks using minimal labeled data, and also sheds light on how latent masking can improve cross-modal learning. Our code is open-sourced at https://github.com/dr-bell/CroSSL.
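The abstract names two mechanisms: modality-specific encoders whose intermediate embeddings are masked in latent space, and a cross-modal aggregator that pools the surviving embeddings into one global embedding. The following is a minimal PyTorch sketch of those two ideas only; the module names, the convolutional encoder, the mean-pooling aggregator, and the mask ratio are all illustrative assumptions, not the authors' implementation (see the linked repository for that).

```python
# Minimal sketch of latent masking + cross-modal aggregation.
# All design choices below are assumptions for illustration, not the paper's code.
import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    """Per-modality encoder mapping (batch, channels, time) -> (batch, dim)."""
    def __init__(self, in_channels: int, dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, dim, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)

class CrossModalAggregator(nn.Module):
    """Masks a random subset of modality embeddings in latent space, then
    aggregates the survivors into a single global embedding (mean pooling)."""
    def __init__(self, dim: int = 128, mask_ratio: float = 0.5):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.proj = nn.Linear(dim, dim)

    def forward(self, embeddings: torch.Tensor) -> torch.Tensor:
        # embeddings: (batch, n_modalities, dim)
        if self.training:
            keep = torch.rand(embeddings.shape[:2],
                              device=embeddings.device) >= self.mask_ratio
            keep[keep.sum(dim=1) == 0, 0] = True  # always keep >= 1 modality
            mask = keep.unsqueeze(-1).float()
            pooled = (embeddings * mask).sum(dim=1) / mask.sum(dim=1)
        else:
            # At inference, a genuinely missing modality could be zeroed out
            # through the same mask-and-pool path.
            pooled = embeddings.mean(dim=1)
        return self.proj(pooled)

# Usage with two hypothetical modalities: 3-axis accelerometer, 1-channel heart rate.
enc_acc, enc_hr = ModalityEncoder(3), ModalityEncoder(1)
agg = CrossModalAggregator()
acc = torch.randn(8, 3, 100)  # (batch, channels, time)
hr = torch.randn(8, 1, 100)
z = torch.stack([enc_acc(acc), enc_hr(hr)], dim=1)  # (8, 2, 128)
global_emb = agg(z)  # (8, 128) global embedding for a downstream classifier
```

Because masking happens on the latent embeddings rather than the raw inputs, a missing modality at inference time behaves like a masked one during training, which is how, per the abstract, the method avoids any preprocessing for missing inputs.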
Pages: 152-160
Page count: 9