PT4AL: Using Self-supervised Pretext Tasks for Active Learning

Cited by: 13
Authors
Yi, John Seon Keun [1 ]
Seo, Minseok [2 ,4 ]
Park, Jongchan [3 ]
Choi, Dong-Geol [4 ]
Affiliations
[1] Georgia Inst Technol, Atlanta, GA 30332 USA
[2] SI Analyt, Mainz, Germany
[3] Lunit Inc, Seoul, South Korea
[4] Hanbat Natl Univ, Daejeon, South Korea
Source
COMPUTER VISION - ECCV 2022 (Lecture Notes in Computer Science)
Funding
National Research Foundation, Singapore
Keywords
Active learning; Self-supervised learning; Pretext task
DOI
10.1007/978-3-031-19809-0_34
CLC classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Labeling a large set of data is expensive. Active learning aims to tackle this problem by requesting annotations for only the most informative data from the unlabeled set. We propose a novel active learning approach that uses self-supervised pretext tasks and a unique data sampler to select data that are both difficult and representative. We find that the loss of a simple self-supervised pretext task, such as rotation prediction, correlates closely with the downstream task loss. Before the active learning iterations begin, the pretext task learner is trained on the unlabeled set, and the unlabeled data are sorted by their pretext task losses and split into batches. In each active learning iteration, the main task model is used to sample the most uncertain data in a batch for annotation. We evaluate our method on various image classification and segmentation benchmarks and achieve compelling performance on CIFAR10, Caltech-101, ImageNet, and Cityscapes. We further show that our method performs well on imbalanced datasets and can be an effective solution to the cold-start problem, where active learning performance suffers from a randomly sampled initial labeled set. Code is available at https://github.com/johnsk95/PT4AL
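The abstract describes a two-stage sampler: sort the unlabeled pool by pretext-task loss, split it into batches, then pick the most uncertain samples within each batch. Below is a minimal NumPy sketch of that loop, not the authors' implementation (their PyTorch code is in the linked repository); pretext_loss and uncertainty are random placeholders, and names such as n_cycles and budget are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
n_unlabeled, n_cycles, budget = 10_000, 10, 100  # pool size, AL iterations, labels per iteration

# (1) Pretext-task losses (e.g., rotation prediction), computed once on the
#     unlabeled pool before any active learning cycle (placeholders here).
pretext_loss = rng.random(n_unlabeled)

# (2) Sort the pool by descending pretext loss and split it into one batch
#     per cycle, so each batch covers a band of pretext-task difficulty.
batches = np.array_split(np.argsort(-pretext_loss), n_cycles)

labeled = []
for batch in batches:
    # (3) Rank the current batch by main-task uncertainty (e.g., 1 - max
    #     softmax probability) and label the top `budget` samples.
    uncertainty = rng.random(len(batch))  # placeholder scores
    picked = batch[np.argsort(-uncertainty)[:budget]]
    labeled.extend(picked.tolist())  # annotate `picked`, retrain, repeat

print(f"selected {len(labeled)} of {n_unlabeled} samples over {n_cycles} cycles")

Restricting uncertainty sampling to one pretext-sorted batch per cycle also yields a non-random initial labeled set, which is how the abstract frames its answer to the cold-start problem.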
Pages: 596 - 612 (17 pages)
Related papers
50 in total; items [31]-[40] shown
  • [31] Skin lesion classification based on hybrid self-supervised pretext task
    Yang, Dedong
    Zhang, Jianwen
    Li, Yangyang
    Ling, Zhiquan
    INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, 2024, 34 (02)
  • [32] Interpreting pretext tasks for active learning: a reinforcement learning approach
    Kim, Dongjoo
    Lee, Minsik
    SCIENTIFIC REPORTS, 2024, 14 (01)
  • [33] Reducing Label Effort: Self-Supervised meets Active Learning
    Bengar, Javad Zolfaghari
    van de Weijer, Joost
    Twardowski, Bartlomiej
    Raducanu, Bogdan
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW 2021), 2021: 1631 - 1639
  • [34] Self-Supervised Image Quality Assessment through Active Learning
    Yu, Yunchao
    Sang, Qingbing
    PROCEEDINGS OF 2024 3RD INTERNATIONAL CONFERENCE ON CYBER SECURITY, ARTIFICIAL INTELLIGENCE AND DIGITAL ECONOMY, CSAIDE 2024, 2024: 315 - 319
  • [35] Self-supervised pretext task collaborative multi-view contrastive learning for video action recognition
    Bi, Shuai
    Hu, Zhengping
    Zhao, Mengyao
    Zhang, Hehao
    Di, Jirui
    Sun, Zhe
    SIGNAL IMAGE AND VIDEO PROCESSING, 2023, 17 (07): 3775 - 3782
  • [36] GCLR: A self-supervised representation learning pretext task for glomerular filtration barrier segmentation in TEM images
    Lin, Guoyu
    Zhang, Zhentai
    Long, Kaixing
    Zhang, Yiwen
    Lu, Yanmeng
    Geng, Jian
    Zhou, Zhitao
    Feng, Qianjin
    Lu, Lijun
    Cao, Lei
    ARTIFICIAL INTELLIGENCE IN MEDICINE, 2023, 146
  • [37] Gated Self-supervised Learning for Improving Supervised Learning
    Fuadi, Erland Hillman
    Ruslim, Aristo Renaldo
    Wardhana, Putu Wahyu Kusuma
    Yudistira, Novanto
    2024 IEEE CONFERENCE ON ARTIFICIAL INTELLIGENCE, CAI 2024, 2024: 611 - 615
  • [39] Self-Supervised Dialogue Learning
    Wu, Jiawei
    Wang, Xin
    Wang, William Yang
    57TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2019), 2019: 3857 - 3867
  • [40] Self-supervised learning model
    Saga, Kazushie
    Sugasaka, Tamami
    Sekiguchi, Minoru
    Fujitsu Scientific and Technical Journal, 1993, 29 (03): 209 - 216