PT4AL: Using Self-supervised Pretext Tasks for Active Learning

Cited by: 13
Authors
Yi, John Seon Keun [1 ]
Seo, Minseok [2 ,4 ]
Park, Jongchan [3 ]
Choi, Dong-Geol [4 ]
Affiliations
[1] Georgia Inst Technol, Atlanta, GA 30332 USA
[2] SI Analyt, Mainz, Germany
[3] Lunit Inc, Seoul, South Korea
[4] Hanbat Natl Univ, Daejeon, South Korea
Source
Computer Vision - ECCV 2022 (Lecture Notes in Computer Science)
Funding
National Research Foundation, Singapore;
Keywords
Active learning; Self-supervised learning; Pretext task;
DOI
10.1007/978-3-031-19809-0_34
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Labeling a large set of data is expensive. Active learning aims to tackle this problem by requesting annotations for only the most informative data from the unlabeled set. We propose a novel active learning approach that utilizes self-supervised pretext tasks and a unique data sampler to select data that are both difficult and representative. We discover that the loss of a simple self-supervised pretext task, such as rotation prediction, is closely correlated with the downstream task loss. Before the active learning iterations begin, the pretext task learner is trained on the unlabeled set, and the unlabeled data are sorted and split into batches by their pretext task losses. In each active learning iteration, the main task model is used to sample the most uncertain data in a batch to be annotated. We evaluate our method on various image classification and segmentation benchmarks and achieve compelling performance on CIFAR10, Caltech-101, ImageNet, and Cityscapes. We further show that our method performs well on imbalanced datasets and can be an effective solution to the cold-start problem, where active learning performance is affected by the randomly sampled initial labeled set. Code is available at https://github.com/johnsk95/PT4AL.
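As a reading aid, here is a minimal Python sketch of the split-and-sample scheme the abstract describes; it is an illustration under stated assumptions, not the authors' released implementation. The arrays pretext_losses and uncertainty are hypothetical stand-ins for the rotation-prediction losses and main-task confidence scores, and the descending (hardest-first) sort order is an assumption.

import numpy as np

# Sketch of the sampler described in the abstract (not the authors' code).
def split_by_pretext_loss(pretext_losses, num_batches):
    # Sort all unlabeled indices by pretext loss, then split into batches.
    order = np.argsort(pretext_losses)[::-1]  # hardest-first order is assumed
    return np.array_split(order, num_batches)

def sample_from_batch(batch_indices, uncertainty, budget):
    # Within one batch, label the samples the main model is least sure about.
    top = np.argsort(uncertainty[batch_indices])[::-1][:budget]
    return batch_indices[top]

# Toy usage: 10,000 unlabeled samples, 10 iterations, 100 labels per round.
rng = np.random.default_rng(0)
pretext_losses = rng.random(10_000)           # from the trained pretext learner
batches = split_by_pretext_loss(pretext_losses, num_batches=10)
labeled = []
for batch in batches:
    uncertainty = rng.random(10_000)          # recomputed by the main model each round
    labeled.extend(sample_from_batch(batch, uncertainty, budget=100).tolist())

Splitting by pretext loss before uncertainty sampling is what makes each round both difficult and representative: every batch covers a distinct slice of the loss distribution, while the in-batch selection targets the hardest examples within that slice.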
Pages: 596-612
Page count: 17
Related Papers
50 records in total
  • [21] Self-Supervised Reinforcement Learning for Active Object Detection
    Fang, Fen
    Liang, Wenyu
    Wu, Yan
    Xu, Qianli
    Lim, Joo-Hwee
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2022, 7(4): 10224-10231
  • [22] Can Pretext-Based Self-Supervised Learning Be Boosted by Downstream Data? A Theoretical Analysis
    Teng, Jiaye
    Huang, Weiran
    He, Haowei
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 151, 2022
  • [23] ESPT: A Self-Supervised Episodic Spatial Pretext Task for Improving Few-Shot Learning
    Rong, Yi
    Lu, Xiongbo
    Sun, Zhaoyang
    Chen, Yaxiong
    Xiong, Shengwu
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37, NO 8, 2023: 9596-9605
  • [24] iSSL-AL: a deep active learning framework based on self-supervised learning for image classification
    Agha, Rand
    Mustafa, Ahmad M.
    Abuein, Qusai
    NEURAL COMPUTING AND APPLICATIONS, 2024, 36(28): 17699-17713
  • [25] ScatSimCLR: self-supervised contrastive learning with pretext task regularization for small-scale datasets
    Kinakh, Vitaliy
    Taran, Olga
    Voloshynovskiy, Svyatoslav
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW 2021), 2021: 1098-1106
  • [26] Self-Supervised Material and Texture Representation Learning for Remote Sensing Tasks
    Akiva, Peri
    Purri, Matthew
    Leotta, Matthew
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2022: 8193-8205
  • [27] Respiratory sound classification using supervised and self-supervised learning
    Lee, Sunju
    Ha, Taeyoung
    Hyon, YunKyong
    Chung, Chaeuk
    Kim, Yoonjoo
    Woo, Seong-Dae
    Lee, Song-I
    RESPIROLOGY, 2023, 28: 160-161
  • [28] Diffraction denoising using self-supervised learning
    Markovic, Magdalena
    Malehmir, Reza
    Malehmir, Alireza
    GEOPHYSICAL PROSPECTING, 2023, 71(7): 1215-1225
  • [29] Uncertainty-Aware Self-Supervised Learning of Spatial Perception Tasks
    Nava, Mirko
    Paolillo, Antonio
    Guzzi, Jerome
    Gambardella, Luca Maria
    Giusti, Alessandro
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2021, 6(4): 6693-6700
  • [30] Diverse Distributions of Self-Supervised Tasks for Meta-Learning in NLP
    Bansal, Trapit
    Gunasekaran, Karthick
    Wang, Tong
    Munkhdalai, Tsendsuren
    McCallum, Andrew
    2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021), 2021: 5812-5824