Self-Supervised Feature Enhancement: Applying Internal Pretext Task to Supervised Learning

Cited by: 1
Authors
Xie, Tianshu [1 ]
Yang, Yuhang [2 ]
Ding, Zilin [2 ]
Cheng, Xuan [2 ]
Wang, Xiaomin [2 ]
Gong, Haigang [2 ]
Liu, Ming [2 ,3 ]
Affiliations
[1] Univ Elect Sci & Technol China, Yangtze Delta Reg Inst Quzhou, Quzhou 324003, Peoples R China
[2] Univ Elect Sci & Technol China, Sch Comp Sci & Engn, Chengdu 611731, Peoples R China
[3] Wenzhou Med Univ, Quzhou Affiliated Hosp, Quzhou Peoples Hosp, Quzhou 324000, Peoples R China
Keywords
Task analysis; Training; Self-supervised learning; Visualization; Supervised learning; Semantics; Predictive models; Deep learning; classification; self-supervised learning; convolutional neural network; feature transformation;
DOI
10.1109/ACCESS.2022.3233104
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Traditional self-supervised learning requires convolutional neural networks (CNNs) to encode high-level semantic visual representations using external pretext tasks (i.e., image- or video-based tasks). In this paper, we show that feature transformations within CNNs can also serve as supervisory signals for constructing a self-supervised task, which we call the internal pretext task, and that this task can be applied to enhance supervised learning. Specifically, we first transform the internal feature maps by discarding different channels, and then define an additional internal pretext task to identify the discarded channels. CNNs are trained to predict joint labels generated by combining the self-supervised labels with the original labels. In this way, the network learns which channels are missing while classifying, encouraging it to mine richer feature information. Extensive experiments show that our approach is effective across various models and datasets, incurs only negligible computational overhead, and is compatible with other methods for further gains.
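The two ingredients described in the abstract can be sketched concretely: a transformation that zeroes out one group of channels in an internal feature map (the group index becomes the self-supervised label), and a joint label that fuses the class label with that group index. The grouping scheme and the label encoding below are illustrative assumptions, not the paper's exact implementation; it is a minimal NumPy sketch of the idea.

```python
import numpy as np

def discard_channels(feature_map, group_idx, num_groups):
    """Zero out one contiguous group of channels in a (C, H, W) feature map.

    group_idx acts as the self-supervised label: the network is asked to
    identify which channel group was discarded. Contiguous grouping is an
    assumption made for illustration.
    """
    c = feature_map.shape[0]
    group_size = c // num_groups
    start = group_idx * group_size
    out = feature_map.copy()
    out[start:start + group_size] = 0.0
    return out

def joint_label(class_label, group_idx, num_groups):
    """Fuse the original class label with the self-supervised label into a
    single index over num_classes * num_groups joint classes."""
    return class_label * num_groups + group_idx

# Example: 8-channel feature map, 4 channel groups of 2 channels each.
fm = np.ones((8, 4, 4), dtype=np.float32)
masked = discard_channels(fm, group_idx=1, num_groups=4)  # zeroes channels 2-3
label = joint_label(class_label=5, group_idx=1, num_groups=4)  # 5*4 + 1 = 21
```

With this encoding, a standard classifier head over `num_classes * num_groups` outputs can be trained with ordinary cross-entropy on the joint label, which is how the extra supervisory signal is folded into supervised training without a separate loss term.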
Pages: 1708-1717
Page count: 10