Accelerating Self-Supervised Learning via Efficient Training Strategies

Cited by: 2
Authors
Kocyigit, Mustafa Taha [1]
Hospedales, Timothy M. [1]
Bilen, Hakan [1]
Affiliation
[1] Univ Edinburgh, Edinburgh, Midlothian, Scotland
Source
2023 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV) | 2023
DOI
10.1109/WACV56688.2023.00561
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Recently, the focus of the computer vision community has shifted from expensive supervised learning towards self-supervised learning of visual representations. While the performance gap between supervised and self-supervised learning has been narrowing, the time required to train self-supervised deep networks remains an order of magnitude longer than that of their supervised counterparts, which hinders progress, imposes a carbon cost, and limits societal benefits to institutions with substantial resources. Motivated by these issues, this paper investigates reducing the training time of recent self-supervised methods through various model-agnostic strategies that have not previously been applied to this problem. In particular, we study three strategies: an extendable cyclic learning rate schedule, a matched progressive schedule of augmentation magnitude and image resolution, and a hard positive mining strategy based on augmentation difficulty. We show that combining all three methods yields up to a 2.7x speed-up in the training time of several self-supervised methods while retaining performance comparable to the standard self-supervised learning setting.
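The abstract names three model-agnostic strategies but does not spell out how they fit into a training loop. The minimal Python sketch below illustrates, under assumptions of our own, what the first two might look like: a cyclic learning-rate schedule that restarts (and can therefore be extended by adding further cycles), and a progressive schedule that grows image resolution and augmentation magnitude together over training. All function names, constants, and schedule shapes (cosine cycles, linear ramps) here are hypothetical illustrations, not the authors' implementation.

import math

# Hypothetical illustration only: the names, constants, and schedule shapes
# below are NOT from the paper; they sketch the two schedule-based strategies
# named in the abstract.

def cyclic_lr(step, base_lr=0.05, cycle_len=1000, min_frac=0.1):
    """Cosine-shaped learning rate that restarts every `cycle_len` steps,
    so training can be extended by simply running additional cycles."""
    pos = (step % cycle_len) / cycle_len  # position within the current cycle, in [0, 1)
    return base_lr * (min_frac + (1.0 - min_frac) * 0.5 * (1.0 + math.cos(math.pi * pos)))

def progressive_params(step, total_steps, min_res=96, max_res=224,
                       min_mag=0.2, max_mag=1.0):
    """Grow image resolution and augmentation magnitude together, linearly,
    from small/weak at the start of training to full strength at the end."""
    frac = min(step / total_steps, 1.0)
    resolution = int(min_res + frac * (max_res - min_res))
    magnitude = min_mag + frac * (max_mag - min_mag)
    return resolution, magnitude

if __name__ == "__main__":
    total_steps = 5000
    for step in (0, 1250, 2500, 5000):
        lr = cyclic_lr(step)
        res, mag = progressive_params(step, total_steps)
        print(f"step={step:5d}  lr={lr:.4f}  res={res}px  aug_magnitude={mag:.2f}")

The appeal of such schedules is that early training steps run on small, lightly augmented images and are therefore cheap, while a restarting learning rate lets training stop at the end of any cycle or continue with additional cycles. The third strategy from the abstract, hard positive mining based on augmentation difficulty, is not shown in this sketch.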
Pages: 5643-5653
Number of pages: 11
Related Papers
50 results in total
  • [1] Efficient Medical Image Assessment via Self-supervised Learning
    Huang, Chun-Yin
    Lei, Qi
    Li, Xiaoxiao
    DATA AUGMENTATION, LABELLING, AND IMPERFECTIONS (DALI 2022), 2022, 13567 : 102 - 111
  • [2] Learning online visual invariances for novel objects via supervised and self-supervised training
    Biscione, Valerio
    Bowers, Jeffrey S.
    NEURAL NETWORKS, 2022, 150 : 222 - 236
  • [3] METRICBERT: TEXT REPRESENTATION LEARNING VIA SELF-SUPERVISED TRIPLET TRAINING
    Malkiel, Itzik
    Ginzburg, Dvir
    Barkan, Oren
    Caciularu, Avi
    Weill, Yoni
    Koenigstein, Noam
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 8142 - 8146
  • [4] Self-Adaptive Training: Bridging Supervised and Self-Supervised Learning
    Huang, Lang
    Zhang, Chao
    Zhang, Hongyang
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2024, 46 (03) : 1362 - 1377
  • [5] Efficient DDPG via the Self-Supervised Method
    Zhang, Guanghao
    Chen, Hongliang
    Li, Jianxun
    PROCEEDINGS OF THE 32ND 2020 CHINESE CONTROL AND DECISION CONFERENCE (CCDC 2020), 2020, : 4636 - 4642
  • [6] A self-supervised deep learning method for data-efficient training in genomics
    Guenduez, Hueseyin Anil
    Binder, Martin
    To, Xiao-Yin
    Mreches, Rene
    Bischl, Bernd
    McHardy, Alice C.
    Muench, Philipp C.
    Rezaei, Mina
    COMMUNICATIONS BIOLOGY, 2023, 6 (01)
  • [7] A Self-Supervised Learning Approach for Accelerating Wireless Network Optimization
    Zhang, Shuai
    Ajayi, Oluwaseun T.
    Cheng, Yu
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2023, 72 (06) : 8074 - 8087
  • [8] Unsupervised Few-Shot Feature Learning via Self-Supervised Training
    Ji, Zilong
    Zou, Xiaolong
    Huang, Tiejun
    Wu, Si
    FRONTIERS IN COMPUTATIONAL NEUROSCIENCE, 2020, 14
  • [9] Self-supervised learning for chest computed tomography: training strategies and effect on downstream applications
    Tariq, Amara
    Ramasamy, Gokul
    Patel, Bhavik
    Banerjee, Imon
    JOURNAL OF MEDICAL IMAGING, 2024, 11 (06)