Multi-View Consistency Contrastive Learning With Hard Positives for Sleep Signals

Cited by: 2
Authors
Deng, Jiaoxue [1 ,2 ]
Lin, Youfang [1 ,2 ]
Jin, Xiyuan [1 ,2 ]
Ning, Xiaojun [1 ,2 ]
Wang, Jing [1 ,2 ]
Affiliations
[1] Beijing Jiaotong Univ, Sch Comp & Informat Technol, Beijing Key Lab Traff Data Anal & Min, Beijing 100044, Peoples R China
[2] CAAC Key Lab Intelligent Passenger Serv Civil Avia, Beijing 101318, Peoples R China
Keywords
Contrastive learning; multi-view learning; sampling strategy; sleep stage;
DOI
10.1109/LSP.2023.3306612
CLC Classification
TM (Electrical Technology); TN (Electronic Technology, Communication Technology);
Discipline Codes
0808 ; 0809 ;
Abstract
Contrastive learning has successfully addressed the scarcity of large-scale labeled datasets, especially in the physiological time series field. Existing methods construct easy positive pairs as substitutes for ground truth based on temporal dynamics or instance consistency. Although hard positive samples can provide richer gradient information and facilitate the learning of more discriminative representations, they are frequently overlooked in sampling strategies, which constrains the classification capacity of models. In this letter, we focus on multi-view physiological signals, i.e., signals recorded by sensors attached to different organs of the human body, and propose a novel hard positive sampling strategy based on view consistency. Additionally, we propose a Multi-View Consistency Contrastive (MVCC) learning framework to jointly extract intra-view temporal dynamics and inter-view consistency features. Experiments on two public datasets demonstrate state-of-the-art performance, achieving 83.25% and 73.37% accuracy on SleepEDF and ISRUC, respectively.
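The core idea in the abstract, selecting a hard positive from another view and contrasting it against other samples, can be illustrated with a minimal NumPy sketch. This is not the paper's actual MVCC implementation; the function name, shapes, and the choice of "lowest cosine similarity among cross-view candidates" as the hardness criterion are illustrative assumptions, and the loss is a standard InfoNCE form with in-batch negatives.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    # Scale each vector to unit length so dot products are cosine similarities.
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def hard_positive_cross_view_loss(z_a, z_b_candidates, temperature=0.5):
    """Illustrative cross-view contrastive loss with hard positive selection.

    z_a:             (N, D) anchor embeddings from one view (e.g., EEG epochs).
    z_b_candidates:  (N, K, D) K candidate positives per anchor from another
                     view of the same sample (e.g., EOG). The "hard" positive
                     is the candidate LEAST similar to its anchor; other
                     anchors' positives serve as in-batch negatives.
    """
    N, K, D = z_b_candidates.shape
    z_a = l2_normalize(z_a)
    z_b = l2_normalize(z_b_candidates)

    # Cosine similarity of each anchor to its own K candidates: (N, K).
    sim_own = np.einsum('nd,nkd->nk', z_a, z_b)
    hard_idx = sim_own.argmin(axis=1)            # hardest positive per anchor
    z_pos = z_b[np.arange(N), hard_idx]          # (N, D)

    # Anchor-vs-positive similarity matrix: diagonal = true positive pairs.
    logits = z_a @ z_pos.T / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # InfoNCE over the batch
```

Choosing the least-similar same-sample candidate keeps the pair label-consistent (both views come from the same sleep epoch) while maximizing the gradient signal, which is the general motivation for hard positives the abstract describes.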
Pages: 1102-1106
Page count: 5
Related Papers
50 records in total
  • [31] Multi-view Network Embedding with Structure and Semantic Contrastive Learning
    Shang, Yifan
    Ye, Xiucai
    Sakurai, Tetsuya
    2023 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO, ICME, 2023, : 870 - 875
  • [32] Deep multi-view contrastive learning for cancer subtype identification
    Chen, Wenlan
    Wang, Hong
    Liang, Cheng
    BRIEFINGS IN BIOINFORMATICS, 2023, 24 (05)
  • [33] MULTI-VIEW SUBSPACE CLUSTERING WITH CONSENSUS GRAPH CONTRASTIVE LEARNING
    Zhang, Jie
    Sun, Yuan
    Guo, Yu
    Wang, Zheng
    Nie, Feiping
    Wang, Fei
    2024 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING, ICASSP 2024, 2024, : 6340 - 6344
  • [34] Multi-view Contrastive Learning for Knowledge-Aware Recommendation
    Yu, Ruiguo
    Li, Zixuan
    Zhao, Mankun
    Zhang, Wenbin
    Yang, Ming
    Yu, Jian
    NEURAL INFORMATION PROCESSING, ICONIP 2023, PT V, 2024, 14451 : 211 - 223
  • [35] Sleep Stage Classification Via Multi-View Based Self-Supervised Contrastive Learning of EEG
    Zhao, Chen
    Wu, Wei
    Zhang, Haoyi
    Zhang, Ruiyan
    Zheng, Xinyue
    Kong, Xiangzeng
    IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2024, 28 (12) : 7068 - 7077
  • [36] Multi-Behavioral Recommender System Based on Multi-View Contrastive Learning
    Zhang, Haiyang
    Gao, Rong
    Liu, Donghua
    Wan, Xiang
    2024 4TH INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND INTELLIGENT SYSTEMS ENGINEERING, MLISE 2024, 2024, : 437 - 441
  • [37] Separable Consistency and Diversity Feature Learning for Multi-View Clustering
    Zhang, Fenghua
    Che, Hangjun
    IEEE SIGNAL PROCESSING LETTERS, 2024, 31 : 1595 - 1599
  • [38] Joint contrastive triple-learning for deep multi-view clustering
    Hu, Shizhe
    Zou, Guoliang
    Zhang, Chaoyang
    Lou, Zhengzheng
    Geng, Ruilin
    Ye, Yangdong
    INFORMATION PROCESSING & MANAGEMENT, 2023, 60 (03)
  • [39] A Multi-View Double Alignment Hashing Network with Weighted Contrastive Learning
    Zhang, Tianlong
    Xue, Zhe
    Dong, Yuchen
    Du, Junping
    Liang, Meiyu
    2024 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO, ICME 2024, 2024,
  • [40] Strengthening incomplete multi-view clustering: An attention contrastive learning method
    Hou, Shudong
    Guo, Lanlan
    Wei, Xu
    IMAGE AND VISION COMPUTING, 2025, 157