Deep Contrastive Representation Learning With Self-Distillation

Cited by: 85
Authors
Xiao, Zhiwen [1 ,2 ,3 ]
Xing, Huanlai [1 ,2 ,3 ]
Zhao, Bowen [1 ,2 ,3 ]
Qu, Rong [4 ]
Luo, Shouxi [1 ,2 ,3 ]
Dai, Penglin [1 ,2 ,3 ]
Li, Ke [1 ,2 ,3 ]
Zhu, Zonghai [1 ,2 ,3 ]
Affiliations
[1] Southwest Jiaotong Univ, Sch Comp & Artificial Intelligence, Chengdu 610031, Peoples R China
[2] Southwest Jiaotong Univ, Tangshan Inst, Tangshan 063000, Peoples R China
[3] Minist Educ, Engn Res Ctr Sustainable Urban Intelligent Transpo, Beijing, Peoples R China
[4] Univ Nottingham, Sch Comp Sci, Nottingham NG7 2RD, England
Funding
National Natural Science Foundation of China;
Keywords
Contrastive learning; knowledge distillation; representation learning; time series classification; time series clustering;
DOI
10.1109/TETCI.2023.3304948
CLC number
TP18 [Artificial intelligence theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recently, contrastive learning (CL) has emerged as a promising way of learning discriminative representations from time series data. In the representation hierarchy, the semantic information extracted at lower levels is the basis of that captured at higher levels. Low-level semantic information is therefore essential and should be considered in the CL process. However, existing CL algorithms mainly focus on the similarity of high-level semantic information, and considering the similarity of low-level semantic information may improve the performance of CL. To this end, we present deep contrastive representation learning with self-distillation (DCRLS) for the time series domain. DCRLS gracefully combines data augmentation, deep contrastive learning, and self-distillation. Our data augmentation provides different views of the same sample as the input of DCRLS. Unlike most CL algorithms, which concentrate on high-level semantic information only, our deep contrastive learning also considers the contrastive similarity of low-level semantic information between peer residual blocks. Our self-distillation promotes knowledge flow from high-level to low-level blocks to help regularize DCRLS during knowledge transfer. The experimental results demonstrate that DCRLS-based structures achieve excellent performance on classification and clustering on 36 UCR2018 datasets.
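The core idea described in the abstract, blockwise contrast between two augmented views plus distillation from the deepest block into the shallower ones, can be sketched in a few lines. The PyTorch-style snippet below is a minimal illustration only, not the authors' implementation: the three-block 1-D CNN, the per-block projection heads, the NT-Xent contrastive loss, the MSE distillation term, and every hyperparameter (width, proj_dim, temperature, alpha) are assumptions made for this example.

```python
# Minimal sketch (assumptions, not the authors' code): per-block contrastive loss
# between two augmented views of a time series, plus a self-distillation term that
# pulls shallow-block representations toward the deepest block's representation.
import torch
import torch.nn as nn
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """SimCLR-style normalized temperature-scaled cross-entropy between two views."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    n = z1.size(0)
    z = torch.cat([z1, z2], dim=0)                        # (2n, d)
    sim = z @ z.t() / temperature                         # cosine similarity logits
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float('-inf'))            # exclude self-pairs
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

class BlockwiseEncoder(nn.Module):
    """Toy 1-D CNN with three blocks; each block gets its own projection head."""
    def __init__(self, in_ch=1, width=64, proj_dim=128):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Conv1d(in_ch if i == 0 else width, width, 3, padding=1),
                          nn.BatchNorm1d(width), nn.ReLU())
            for i in range(3)
        ])
        self.heads = nn.ModuleList([nn.Linear(width, proj_dim) for _ in range(3)])

    def forward(self, x):
        projections = []
        for block, head in zip(self.blocks, self.heads):
            x = block(x)
            projections.append(head(x.mean(dim=-1)))       # global average pool + project
        return projections                                  # low-, mid-, high-level

def dcrls_style_loss(model, view1, view2, alpha=0.5):
    f1, f2 = model(view1), model(view2)
    # Contrast corresponding blocks of the two views (low- and high-level alike).
    contrast = sum(nt_xent(a, b) for a, b in zip(f1, f2)) / len(f1)
    # Distill the deepest block (teacher) into the shallower blocks (students).
    teacher = f1[-1].detach()
    distill = sum(F.mse_loss(f, teacher) for f in f1[:-1]) / (len(f1) - 1)
    return contrast + alpha * distill

# Usage: two augmented views of a batch of univariate time series (batch=8, length=128).
x1, x2 = torch.randn(8, 1, 128), torch.randn(8, 1, 128)
model = BlockwiseEncoder()
loss = dcrls_style_loss(model, x1, x2)
loss.backward()
```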
Pages: 3 - 15 (13 pages)