Self-Supervised Feature Enhancement: Applying Internal Pretext Task to Supervised Learning

Cited by: 1

Authors
Xie, Tianshu [1 ]
Yang, Yuhang [2 ]
Ding, Zilin [2 ]
Cheng, Xuan [2 ]
Wang, Xiaomin [2 ]
Gong, Haigang [2 ]
Liu, Ming [2 ,3 ]
Affiliations
[1] Univ Elect Sci & Technol China, Yangtze Delta Reg Inst Quzhou, Quzhou 324003, Peoples R China
[2] Univ Elect Sci & Technol China, Sch Comp Sci & Engn, Chengdu 611731, Peoples R China
[3] Wenzhou Med Univ, Quzhou Affiliated Hosp, Quzhou Peoples Hosp, Quzhou 324000, Peoples R China
Keywords
Task analysis; Training; Self-supervised learning; Visualization; Supervised learning; Semantics; Predictive models; Deep learning; Classification; Convolutional neural network; Feature transformation
DOI
10.1109/ACCESS.2022.3233104
CLC Classification Number
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
Traditional self-supervised learning trains convolutional neural networks (CNNs) with external pretext tasks (i.e., image- or video-based tasks) to encode high-level semantic visual representations. In this paper, we show that feature transformations within CNNs can also be regarded as supervisory signals for constructing a self-supervised task, which we call the internal pretext task, and that such a task can be applied to enhance supervised learning. Specifically, we first transform the internal feature maps by discarding different channels, and then define an additional internal pretext task that identifies which channels were discarded. The CNN is trained to predict the joint labels generated by combining the self-supervised labels with the original class labels. In this way, the network knows which channels are missing while it classifies, encouraging it to mine richer feature information. Extensive experiments show that our approach is effective across various models and datasets while incurring only negligible computational overhead. Furthermore, it is compatible with other methods and yields further improvements when combined with them.
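To make the channel-discarding transformation and joint-label construction concrete, below is a minimal PyTorch-style sketch based only on the abstract. The group-wise discarding scheme, the names `ChannelDiscard` and `joint_label`, and the choice of `num_groups` are illustrative assumptions, not the authors' released implementation.

```python
# A minimal sketch of an internal pretext task of the kind the abstract
# describes. All names and design choices below are hypothetical.
import torch
import torch.nn as nn


class ChannelDiscard(nn.Module):
    """Zeroes one contiguous group of channels per sample and records which
    group was dropped; the group index serves as the self-supervised label."""

    def __init__(self, num_groups: int = 4):
        super().__init__()
        self.num_groups = num_groups

    def forward(self, x: torch.Tensor):
        b, c, _, _ = x.shape
        group_size = c // self.num_groups  # assumes c is divisible by num_groups
        # Randomly pick, per sample, which channel group to discard.
        group = torch.randint(self.num_groups, (b,), device=x.device)
        mask = torch.ones_like(x)
        for i in range(b):
            start = int(group[i]) * group_size
            mask[i, start:start + group_size] = 0.0
        return x * mask, group


def joint_label(y: torch.Tensor, group: torch.Tensor, num_groups: int) -> torch.Tensor:
    """Fold the original class label y and the pretext label into one index,
    so the classifier head needs num_classes * num_groups output logits."""
    return y * num_groups + group
```

In a training loop, such a module would sit after a chosen convolutional stage (`feats, group = discard(features)`), with cross-entropy computed on the joint-label logits; at inference the discarding would be bypassed and the joint logits marginalized over the pretext dimension to recover per-class scores. Where the transformation is inserted and how the joint prediction is decoded at test time are design choices specified in the paper itself.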
Pages: 1708-1717
Page count: 10
Related Papers (showing 10 of 50)
  • [1] Xu, Jiashu; Stirenko, Sergii. Mixup Feature: A Pretext Task Self-Supervised Learning Method for Enhanced Visual Feature Learning. IEEE ACCESS, 2023, 11: 82400-82409.
  • [2] Jing, Longlong; Tian, Yingli. Self-Supervised Visual Feature Learning With Deep Neural Networks: A Survey. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2021, 43(11): 4037-4058.
  • [3] Huang, Lang; Zhang, Chao; Zhang, Hongyang. Self-Adaptive Training: Bridging Supervised and Self-Supervised Learning. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2024, 46(03): 1362-1377.
  • [4] Haja, Asmaa; van der Woude, Bart; Schomaker, Lambert. Organoids Segmentation using Self-Supervised Learning: How Complex Should the Pretext Task Be? 2023 10TH INTERNATIONAL CONFERENCE ON BIOMEDICAL AND BIOINFORMATICS ENGINEERING, ICBBE 2023, 2023: 17-27.
  • [5] Zaiem, Salah; Parcollet, Titouan; Essid, Slim; Heba, Abdelwahab. Pretext Tasks Selection for Multitask Self-Supervised Audio Representation Learning. IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, 2022, 16(06): 1439-1453.
  • [6] Yang, Dedong; Zhang, Jianwen; Li, Yangyang; Ling, Zhiquan. Skin lesion classification based on hybrid self-supervised pretext task. INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, 2024, 34(02).
  • [7] Zaiem, Salah; Parcollet, Titouan; Essid, Slim. Conditional Independence for Pretext Task Selection in Self-Supervised Speech Representation Learning. INTERSPEECH 2021, 2021: 2851-2855.
  • [8] Hafez, Muhammad Burhan; Wermter, Stefan. Continual Robot Learning Using Self-Supervised Task Inference. IEEE TRANSACTIONS ON COGNITIVE AND DEVELOPMENTAL SYSTEMS, 2024, 16(03): 947-960.
  • [9] Feng, Zhanzhou; Zhang, Shiliang. Evolved Hierarchical Masking for Self-Supervised Learning. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2025, 47(02): 1013-1027.
  • [10] Liu, Yixin; Jin, Ming; Pan, Shirui; Zhou, Chuan; Zheng, Yu; Xia, Feng; Yu, Philip S. Graph Self-Supervised Learning: A Survey. IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2023, 35(06): 5879-5900.