PT4AL: Using Self-supervised Pretext Tasks for Active Learning

Cited by: 13
Authors
Yi, John Seon Keun [1 ]
Seo, Minseok [2 ,4 ]
Park, Jongchan [3 ]
Choi, Dong-Geol [4 ]
Affiliations
[1] Georgia Inst Technol, Atlanta, GA 30332 USA
[2] SI Analyt, Mainz, Germany
[3] Lunit Inc, Seoul, South Korea
[4] Hanbat Natl Univ, Daejeon, South Korea
Source
Computer Vision - ECCV 2022 (Lecture Notes in Computer Science, Springer)
Funding
National Research Foundation, Singapore;
Keywords
Active learning; Self-supervised learning; Pretext task;
DOI
10.1007/978-3-031-19809-0_34
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
Labeling a large set of data is expensive. Active learning aims to tackle this problem by asking to annotate only the most informative data from the unlabeled set. We propose a novel active learning approach that utilizes self-supervised pretext tasks and a unique data sampler to select data that are both difficult and representative. We discover that the loss of a simple self-supervised pretext task, such as rotation prediction, is closely correlated to the downstream task loss. Before the active learning iterations, the pretext task learner is trained on the unlabeled set, and the unlabeled data are sorted and split into batches by their pretext task losses. In each active learning iteration, the main task model is used to sample the most uncertain data in a batch to be annotated. We evaluate our method on various image classification and segmentation benchmarks and achieve compelling performances on CIFAR10, Caltech-101, ImageNet, and Cityscapes. We further show that our method performs well on imbalanced datasets, and can be an effective solution to the cold-start problem where active learning performance is affected by the randomly sampled initial labeled set. Code is available at https://github.com/johnsk95/PT4AL
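The abstract above outlines a two-stage procedure: train a pretext task learner (e.g., rotation prediction) on the unlabeled pool, sort the pool by pretext loss and split it into batches, then query the most uncertain samples from one batch in each active learning iteration. Below is a minimal Python sketch of that sampling loop, not the authors' implementation (see the linked repository); the pretext losses and uncertainty scores are random stand-ins for the real rotation-prediction losses and main-task uncertainties, and names such as budget_per_iter are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)

    num_unlabeled = 10_000     # size of the unlabeled pool
    num_iterations = 10        # active learning cycles, one batch per cycle
    budget_per_iter = 100      # annotation budget per cycle

    # Stand-in for per-sample losses of a rotation-prediction network
    # trained on the unlabeled set (assumed already computed).
    pretext_losses = rng.gamma(shape=2.0, scale=1.0, size=num_unlabeled)

    # Sort the pool by pretext loss (hardest first) and split it into
    # equally sized batches covering different difficulty ranges.
    sorted_indices = np.argsort(pretext_losses)[::-1]
    batches = np.array_split(sorted_indices, num_iterations)

    labeled = []
    for batch in batches:
        # Stand-in for main-task uncertainty; in the paper this comes from
        # the classifier trained on the labels collected so far.
        uncertainty = rng.random(len(batch))

        # Query the most uncertain samples in the current batch.
        query = batch[np.argsort(uncertainty)[::-1][:budget_per_iter]]
        labeled.extend(query.tolist())
        # ... annotate `query`, retrain the main task model, continue ...

    print(f"selected {len(labeled)} samples over {num_iterations} iterations")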
Pages: 596-612
Page count: 17
Related Papers
50 records in total
  • [11] Self-Supervised Learning of Visual Robot Localization Using LED State Prediction as a Pretext Task
    Nava, Mirko
    Carlotti, Nicholas
    Crupi, Luca
    Palossi, Daniele
    Giusti, Alessandro
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2024, 9 (04) : 3363 - 3370
  • [12] Contrastive Spatio-Temporal Pretext Learning for Self-Supervised Video Representation
    Zhang, Yujia
    Po, Lai-Man
    Xu, Xuyuan
    Liu, Mengyang
    Wang, Yexin
    Ou, Weifeng
    Zhao, Yuzhi
    Yu, Wing-Yin
THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / THE TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022: 3380 - 3389
  • [13] Exploring complementary information of self-supervised pretext tasks for unsupervised video pre-training
    Zhou, Wei
    Hou, Yi
    Ouyang, Kewei
    Zhou, Shilin
    IET COMPUTER VISION, 2022, 16 (03) : 255 - 265
  • [14] SMM: Self-supervised Multi-Illumination Color Constancy Model with Multiple Pretext Tasks
    Feng, Ziyu
    Xu, Zheming
    Qin, Haina
    Lang, Congyan
    Li, Bing
    Xiong, Weihua
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023, : 8653 - 8661
  • [15] Conditional Independence for Pretext Task Selection in Self-Supervised Speech Representation Learning
    Zaiem, Salah
    Parcollet, Titouan
    Essid, Slim
    INTERSPEECH 2021, 2021, : 2851 - 2855
  • [16] Deep active sampling with self-supervised learning
    Shi, Haochen
    Zhou, Hui
    FRONTIERS OF COMPUTER SCIENCE, 2023, 17 (04)
  • [18] Mixup Feature: A Pretext Task Self-Supervised Learning Method for Enhanced Visual Feature Learning
    Xu, Jiashu
    Stirenko, Sergii
    IEEE ACCESS, 2023, 11 : 82400 - 82409
  • [19] Single-branch self-supervised learning with hybrid tasks
    Zhao, Wenyi
    Pan, Xipeng
    Xu, Yibo
    Yang, Huihua
    COMPUTERS & ELECTRICAL ENGINEERING, 2022, 102
  • [20] Combining Self-supervised Learning and Active Learning for Disfluency Detection
    Wang, Shaolei
    Wang, Zhongyuan
    Che, Wanxiang
    Zhao, Sendong
    Liu, Ting
    ACM TRANSACTIONS ON ASIAN AND LOW-RESOURCE LANGUAGE INFORMATION PROCESSING, 2022, 21 (03)