PT4AL: Using Self-supervised Pretext Tasks for Active Learning

Cited by: 13
Authors
Yi, John Seon Keun [1 ]
Seo, Minseok [2 ,4 ]
Park, Jongchan [3 ]
Choi, Dong-Geol [4 ]
Affiliations
[1] Georgia Inst Technol, Atlanta, GA 30332 USA
[2] SI Analyt, Mainz, Germany
[3] Lunit Inc, Seoul, South Korea
[4] Hanbat Natl Univ, Daejeon, South Korea
Source
Funding
National Research Foundation of Singapore;
Keywords
Active learning; Self-supervised learning; Pretext task;
DOI
10.1007/978-3-031-19809-0_34
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Labeling a large set of data is expensive. Active learning aims to tackle this problem by requesting annotations for only the most informative data in the unlabeled set. We propose a novel active learning approach that utilizes self-supervised pretext tasks and a unique data sampler to select data that are both difficult and representative. We discover that the loss of a simple self-supervised pretext task, such as rotation prediction, is closely correlated with the downstream task loss. Before the active learning iterations, the pretext task learner is trained on the unlabeled set, and the unlabeled data are sorted and split into batches by their pretext task losses. In each active learning iteration, the main task model is used to sample the most uncertain data in a batch to be annotated. We evaluate our method on various image classification and segmentation benchmarks and achieve compelling performance on CIFAR10, Caltech-101, ImageNet, and Cityscapes. We further show that our method performs well on imbalanced datasets and can be an effective solution to the cold-start problem, where active learning performance is affected by the randomly sampled initial labeled set. Code is available at https://github.com/johnsk95/PT4AL.
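The abstract describes a two-stage sampling procedure: a self-supervised pretext model (e.g. rotation prediction) first ranks the unlabeled pool by its pretext loss, and in each active learning round the main-task model then picks the most uncertain samples from one loss-sorted batch. The Python sketch below illustrates only that sampling step; the names (pt4al_select, main_uncertainty, budget) and the descending sort order are assumptions made for this example, not the authors' implementation, which is available from the repository linked above.

    # Illustrative sketch of the batch-split uncertainty sampling described
    # in the abstract; not the authors' released code.
    import numpy as np

    def pt4al_select(pretext_losses, main_uncertainty, num_rounds, budget):
        """Sort the unlabeled pool by pretext-task loss, split it into one
        batch per active learning round, and return the `budget` most
        uncertain sample indices from each batch."""
        # Descending order (hardest pretext samples first) is an assumption;
        # the abstract only states that the pool is sorted and split.
        order = np.argsort(pretext_losses)[::-1]
        batches = np.array_split(order, num_rounds)

        selected = []
        for batch in batches:
            # main_uncertainty maps sample indices to uncertainty scores from
            # the current main-task model, e.g. predictive entropy.
            scores = main_uncertainty(batch)
            top = batch[np.argsort(scores)[::-1][:budget]]
            selected.append(top)
            # In a full loop, `top` would be sent for annotation and the
            # main-task model retrained before moving to the next batch.
        return selected

    # Toy usage with random stand-ins for the pretext losses and scores.
    rng = np.random.default_rng(0)
    picks = pt4al_select(rng.random(10_000),
                         lambda idx: rng.random(len(idx)),
                         num_rounds=10, budget=100)

Because the pool is split by pretext loss before uncertainty sampling, each round draws from a different loss range of the unlabeled set rather than from the whole pool, which is consistent with the abstract's goal of selecting data that are both difficult and representative.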
Pages: 596-612
Number of pages: 17
Related Papers
50 records in total
  • [1] Pretext Tasks Selection for Multitask Self-Supervised Audio Representation Learning
    Zaiem, Salah
    Parcollet, Titouan
    Essid, Slim
    Heba, Abdelwahab
    IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, 2022, 16(06): 1439-1453
  • [2] Survey on Self-Supervised Learning: Auxiliary Pretext Tasks and Contrastive Learning Methods in Imaging
    Albelwi, Saleh
    ENTROPY, 2022, 24(04)
  • [3] Volumetric white matter tract segmentation with nested self-supervised learning using sequential pretext tasks
    Lu, Qi
    Li, Yuxing
    Ye, Chuyang
    MEDICAL IMAGE ANALYSIS, 2021, 72
  • [4] Self-supervised contrastive learning for heterogeneous graph based on multi-pretext tasks
    Ma, Shuai
    Liu, Jian-wei
    NEURAL COMPUTING & APPLICATIONS, 2023, 35(14): 10275-10296
  • [5] Self-Supervised Learning of Pretext-Invariant Representations
    Misra, Ishan
    van der Maaten, Laurens
    2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2020: 6706-6716
  • [6] Tailoring pretext tasks to improve self-supervised learning in histopathologic subtype classification of lung adenocarcinomas
    Ding, Ruiwen
    Yadav, Anil
    Rodriguez, Erika
    da Silva, Ana Cristina Araujo Lemos
    Hsu, William
    COMPUTERS IN BIOLOGY AND MEDICINE, 2023, 166
  • [7] Self-Supervised Feature Enhancement: Applying Internal Pretext Task to Supervised Learning
    Xie, Tianshu
    Yang, Yuhang
    Ding, Zilin
    Cheng, Xuan
    Wang, Xiaomin
    Gong, Haigang
    Liu, Ming
    IEEE ACCESS, 2023, 11: 1708-1717
  • [8] Organoids Segmentation using Self-Supervised Learning: How Complex Should the Pretext Task Be?
    Haja, Asmaa
    van der Woude, Bart
    Schomaker, Lambert
    2023 10TH INTERNATIONAL CONFERENCE ON BIOMEDICAL AND BIOINFORMATICS ENGINEERING, ICBBE 2023, 2023: 17-27
  • [9] Implicit sensing self-supervised learning based on graph multi-pretext tasks for traffic flow prediction
    Sattarzadeh, Ali Reza
    Pathirana, Pubudu Nishantha
    Palaniswami, Marimuthu
    NEURAL COMPUTING & APPLICATIONS, 2025, 37(2): 1041-1066