Deep Contrastive Representation Learning With Self-Distillation

Cited by: 85
Authors
Xiao, Zhiwen [1 ,2 ,3 ]
Xing, Huanlai [1 ,2 ,3 ]
Zhao, Bowen [1 ,2 ,3 ]
Qu, Rong [4 ]
Luo, Shouxi [1 ,2 ,3 ]
Dai, Penglin [1 ,2 ,3 ]
Li, Ke [1 ,2 ,3 ]
Zhu, Zonghai [1 ,2 ,3 ]
Affiliations
[1] Southwest Jiaotong Univ, Sch Comp & Artificial Intelligence, Chengdu 610031, Peoples R China
[2] Southwest Jiaotong Univ, Tangshan Inst, Tangshan 063000, Peoples R China
[3] Minist Educ, Engn Res Ctr Sustainable Urban Intelligent Transpo, Beijing, Peoples R China
[4] Univ Nottingham, Sch Comp Sci, Nottingham NG7 2RD, England
Funding
National Natural Science Foundation of China;
Keywords
Contrastive learning; knowledge distillation; representation learning; time series classification; time series clustering; SERIES CLASSIFICATION;
DOI
10.1109/TETCI.2023.3304948
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recently, contrastive learning (CL) has become a promising way of learning discriminative representations from time series data. In the representation hierarchy, the semantic information extracted at lower levels forms the basis of that captured at higher levels. Low-level semantic information is therefore essential and should be considered in the CL process. However, existing CL algorithms mainly focus on the similarity of high-level semantic information, and taking the similarity of low-level semantic information into account may improve CL performance. To this end, we present deep contrastive representation learning with self-distillation (DCRLS) for the time series domain. DCRLS gracefully combines data augmentation, deep contrastive learning, and self-distillation. Our data augmentation provides different views of the same sample as the input of DCRLS. Unlike most CL algorithms, which concentrate on high-level semantic information only, our deep contrastive learning also considers the contrastive similarity of low-level semantic information between peer residual blocks. Our self-distillation promotes knowledge flow from high-level to low-level blocks to help regularize DCRLS in the knowledge transfer process. The experimental results demonstrate that DCRLS-based structures achieve excellent performance on classification and clustering on 36 UCR2018 datasets.
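
Below is a minimal, hedged sketch (in PyTorch) of the mechanism the abstract describes: two augmented views of a time series pass through a stack of residual blocks, a contrastive loss is computed between peer blocks of the two views at every level rather than only at the last one, and a self-distillation loss transfers soft knowledge from the deepest block to the shallower ones. The module names, block counts, loss weights, and the jitter augmentation are illustrative assumptions, not the authors' implementation.

# Hedged sketch of the DCRLS idea from the abstract (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResBlock1d(nn.Module):
    """A simple 1-D convolutional residual block (assumed architecture)."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv1d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv1d(channels, channels, 3, padding=1)
        self.bn1 = nn.BatchNorm1d(channels)
        self.bn2 = nn.BatchNorm1d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + x)

def nt_xent(z1, z2, tau=0.5):
    """Standard NT-Xent contrastive loss between two views' embeddings."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                       # (2N, D)
    sim = z @ z.t() / tau                                 # scaled cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float('-inf'))            # exclude self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

class DCRLSSketch(nn.Module):
    def __init__(self, in_channels=1, channels=64, num_blocks=3, proj_dim=128):
        super().__init__()
        self.stem = nn.Conv1d(in_channels, channels, 7, padding=3)
        self.blocks = nn.ModuleList([ResBlock1d(channels) for _ in range(num_blocks)])
        # one projection head per block so low- and high-level features are comparable
        self.heads = nn.ModuleList([nn.Linear(channels, proj_dim) for _ in range(num_blocks)])

    def embed(self, x):
        h = self.stem(x)
        zs = []
        for block, head in zip(self.blocks, self.heads):
            h = block(h)
            zs.append(head(h.mean(dim=-1)))               # global average pooling per block
        return zs

    def forward(self, view1, view2, alpha=1.0, beta=0.5, tau=4.0):
        zs1, zs2 = self.embed(view1), self.embed(view2)
        # contrastive term between peer blocks of the two views, at every level
        contrast = sum(nt_xent(z1, z2) for z1, z2 in zip(zs1, zs2)) / len(zs1)
        # self-distillation: the deepest block teaches the shallower ones via soft targets
        distill = 0.0
        for zs in (zs1, zs2):
            teacher = F.softmax(zs[-1].detach() / tau, dim=1)
            for z in zs[:-1]:
                distill = distill + F.kl_div(F.log_softmax(z / tau, dim=1),
                                             teacher, reduction='batchmean')
        return alpha * contrast + beta * distill

# Usage: two augmentations of the same batch of univariate series (N, 1, T).
model = DCRLSSketch()
x = torch.randn(8, 1, 128)
view1 = x + 0.1 * torch.randn_like(x)                     # toy jitter augmentation
view2 = x + 0.1 * torch.randn_like(x)
loss = model(view1, view2)
loss.backward()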
Pages: 3 - 15
Number of pages: 13
Related Papers
50 records in total
  • [41] A multi-view contrastive learning and semi-supervised self-distillation framework for early recurrence prediction in ovarian cancer
    Dong, Chi
    Wu, Yujiao
    Sun, Bo
    Bo, Jiayi
    Huang, Yufei
    Geng, Yikang
    Zhang, Qianhui
    Liu, Ruixiang
    Guo, Wei
    Wang, Xingling
    Jiang, Xiran
    COMPUTERIZED MEDICAL IMAGING AND GRAPHICS, 2025, 119
  • [42] Wasserstein Contrastive Representation Distillation
    Chen, Liqun
    Wang, Dong
    Gan, Zhe
    Liu, Jingjing
    Henao, Ricardo
    Carin, Lawrence
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 16291 - 16300
  • [43] Deep contrastive representation learning for supervised tasks
    Duan, Chenguang
    Jiao, Yuling
    Kang, Lican
    Yang, Jerry Zhijian
    Zhou, Fusheng
    PATTERN RECOGNITION, 2025, 161
  • [44] Self-Distillation Feature Learning Network for Optical and SAR Image Registration
    Quan, Dou
    Wei, Huiyuan
    Wang, Shuang
    Lei, Ruiqi
    Duan, Baorui
    Li, Yi
    Hou, Biao
    Jiao, Licheng
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2022, 60
  • [45] Semantic Super-Resolution via Self-Distillation and Adversarial Learning
    Park, Hanhoon
    IEEE ACCESS, 2024, 12 : 2361 - 2370
  • [46] Bayesian Optimization Meets Self-Distillation
    Lee, HyunJae
    Song, Heon
    Lee, Hyeonsoo
    Lee, Gi-hyeon
    Park, Suyeong
    Yoo, Donggeun
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION, ICCV, 2023, : 1696 - 1705
  • [47] Tolerant Self-Distillation for image classification
    Liu, Mushui
    Yu, Yunlong
    Ji, Zhong
    Han, Jungong
    Zhang, Zhongfei
    NEURAL NETWORKS, 2024, 174
  • [48] Restructuring the Teacher and Student in Self-Distillation
    Zheng, Yujie
    Wang, Chong
    Tao, Chenchen
    Lin, Sunqi
    Qian, Jiangbo
    Wu, Jiafei
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2024, 33 : 5551 - 5563
  • [49] Self-distillation for Surgical Action Recognition
    Yamlahi, Amine
    Thuy Nuong Tran
    Godau, Patrick
    Schellenberg, Melanie
    Michael, Dominik
    Smidt, Finn-Henri
    Noelke, Jan-Hinrich
    Adler, Tim J.
    Tizabi, Minu Dietlinde
    Nwoye, Chinedu Innocent
    Padoy, Nicolas
    Maier-Hein, Lena
    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION, MICCAI 2023, PT IX, 2023, 14228 : 637 - 646
  • [50] Future Augmentation with Self-distillation in Recommendation
    Liu, Chong
    Xie, Ruobing
    Liu, Xiaoyang
    Wang, Pinzheng
    Zheng, Rongqin
    Zhang, Lixin
    Li, Juntao
    Xia, Feng
    Lin, Leyu
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES: APPLIED DATA SCIENCE AND DEMO TRACK, ECML PKDD 2023, PT VI, 2023, 14174 : 602 - 618