Multi-View Consistency Contrastive Learning With Hard Positives for Sleep Signals

Cited: 2
Authors
Deng, Jiaoxue [1 ,2 ]
Lin, Youfang [1 ,2 ]
Jin, Xiyuan [1 ,2 ]
Ning, Xiaojun [1 ,2 ]
Wang, Jing [1 ,2 ]
Affiliations
[1] Beijing Jiaotong Univ, Sch Comp & Informat Technol, Beijing Key Lab Traff Data Anal & Min, Beijing 100044, Peoples R China
[2] CAAC Key Lab Intelligent Passenger Serv Civil Avia, Beijing 101318, Peoples R China
Keywords
Contrastive learning; multi-view learning; sampling strategy; sleep stage
DOI
10.1109/LSP.2023.3306612
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology]
Discipline Codes
0808; 0809
Abstract
Contrastive learning has successfully addressed the scarcity of large-scale labeled datasets, especially in the physiological time series field. Existing methods construct easy positive pairs as substitutes for ground truth based on temporal dynamics or instance consistency. Although hard positive samples can provide richer gradient information and yield more discriminative representations, they are frequently overlooked in sampling strategies, which constrains the classification capacity of models. In this letter, we focus on multi-view physiological signals, which are recorded by sensors attached to different organs of the human body, and propose a novel hard positive sampling strategy based on view consistency. We further propose a Multi-View Consistency Contrastive (MVCC) learning framework to jointly extract intra-view temporal dynamics and inter-view consistency features. Experiments on two public datasets demonstrate state-of-the-art performance, achieving 83.25% and 73.37% accuracy on SleepEDF and ISRUC, respectively.
Pages: 1102-1106
Page count: 5
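The abstract only summarizes the sampling idea, so the following is a minimal PyTorch sketch of what view-consistency-based hard positive mining combined with intra-view and inter-view contrastive terms could look like. Everything here is an assumption for illustration: the function names (`mine_hard_positives`, `info_nce`), the hardness score `s2 - s1`, the temperature, and the joint loss are hypothetical, not the published MVCC implementation.

```python
# Illustrative sketch (NOT the MVCC letter's code): hard positive mining
# via view consistency for two-view signals (e.g., EEG/EOG sleep epochs).
import torch
import torch.nn.functional as F


def info_nce(anchor, positive, temperature=0.1):
    """InfoNCE loss: row i of `positive` is the positive for row i of
    `anchor`; all other rows in the batch serve as negatives."""
    anchor = F.normalize(anchor, dim=1)
    positive = F.normalize(positive, dim=1)
    logits = anchor @ positive.t() / temperature          # (B, B) similarities
    labels = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, labels)


@torch.no_grad()  # index selection is non-differentiable
def mine_hard_positives(z1, z2):
    """Hypothetical hardness rule: for each anchor i in view 1, pick the
    sample j that is most similar to i in view 2 (cross-view consistency
    suggests the same sleep stage) but least similar in view 1 (the
    anchor view has not yet captured that similarity -> 'hard')."""
    s1 = F.normalize(z1, dim=1) @ F.normalize(z1, dim=1).t()  # view-1 sims
    s2 = F.normalize(z2, dim=1) @ F.normalize(z2, dim=1).t()  # view-2 sims
    score = s2 - s1                      # high cross-view, low intra-view
    score.fill_diagonal_(float('-inf'))  # exclude the anchor itself
    return score.argmax(dim=1)           # hard positive index per anchor


# Usage with dummy embeddings standing in for two encoder branches.
B, D = 32, 128
z1 = torch.randn(B, D, requires_grad=True)  # view-1 embeddings
z2 = torch.randn(B, D, requires_grad=True)  # view-2 embeddings
hard_idx = mine_hard_positives(z1, z2)
loss = info_nce(z1, z1[hard_idx])  # intra-view term with hard positives
loss = loss + info_nce(z1, z2)     # inter-view consistency term
loss.backward()
```

The (assumed) score `s2 - s1` favors pairs that the complementary view judges consistent but the anchor view does not yet embed as similar, which is exactly the regime where the abstract's "richer gradient information" argument for hard positives applies.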