PT4AL: Using Self-supervised Pretext Tasks for Active Learning

Cited by: 13
Authors
Yi, John Seon Keun [1 ]
Seo, Minseok [2 ,4 ]
Park, Jongchan [3 ]
Choi, Dong-Geol [4 ]
Affiliations
[1] Georgia Inst Technol, Atlanta, GA 30332 USA
[2] SI Analyt, Mainz, Germany
[3] Lunit Inc, Seoul, South Korea
[4] Hanbat Natl Univ, Daejeon, South Korea
Funding
National Research Foundation, Singapore;
Keywords
Active learning; Self-supervised learning; Pretext task;
DOI
10.1007/978-3-031-19809-0_34
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Labeling a large set of data is expensive. Active learning aims to tackle this problem by requesting annotations for only the most informative data in the unlabeled set. We propose a novel active learning approach that utilizes self-supervised pretext tasks and a unique data sampler to select data that are both difficult and representative. We discover that the loss of a simple self-supervised pretext task, such as rotation prediction, is closely correlated with the downstream task loss. Before the active learning iterations begin, the pretext task learner is trained on the unlabeled set, and the unlabeled data are sorted and split into batches by their pretext task losses. In each active learning iteration, the main task model is used to sample the most uncertain data in a batch for annotation. We evaluate our method on various image classification and segmentation benchmarks and achieve compelling performance on CIFAR10, Caltech-101, ImageNet, and Cityscapes. We further show that our method performs well on imbalanced datasets and can be an effective solution to the cold-start problem, where active learning performance is affected by the randomly sampled initial labeled set. Code is available at https://github.com/johnsk95/PT4AL.
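The sampling loop described in the abstract can be summarized in a short sketch. The following Python/NumPy code is a minimal illustration, not the authors' implementation (see the linked repository for that): the names `pt4al_select` and `pretext_losses`, the `predict_proba` interface on the main-task model, and the use of one minus the maximum softmax probability as the uncertainty measure are all assumptions made for clarity.

```python
# Minimal sketch of the PT4AL sampling loop (illustrative only; the
# authors' real implementation is at github.com/johnsk95/PT4AL).
import numpy as np

def pt4al_select(unlabeled, pretext_losses, main_model, num_iters, budget):
    """Split the unlabeled pool into batches by pretext-task loss, then
    pick the `budget` most uncertain samples from one batch per
    active learning iteration."""
    # Sort the whole unlabeled pool by self-supervised pretext loss
    # (e.g. rotation prediction), hardest samples first.
    order = np.argsort(pretext_losses)[::-1]
    # One batch of the sorted pool per active learning iteration.
    batches = np.array_split(order, num_iters)

    labeled = []
    for batch in batches:
        # Uncertainty proxy: 1 - max softmax probability under the
        # current main-task model (predict_proba is an assumed API).
        probs = main_model.predict_proba(unlabeled[batch])
        uncertainty = 1.0 - probs.max(axis=1)
        # Send the most uncertain samples in this batch for annotation.
        picked = batch[np.argsort(uncertainty)[::-1][:budget]]
        labeled.extend(picked.tolist())
        # ... retrain main_model on all labels gathered so far ...
    return labeled
```

Batching by pretext loss spreads the queries across difficulty levels rather than concentrating them on the hardest samples, and it gives a principled way to choose the very first batch before any labels exist, which is how the method addresses the cold-start problem mentioned above.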
Pages: 596-612
Page count: 17
Related Papers
50 records in total
  • [41] Longitudinal self-supervised learning
    Zhao, Qingyu
    Liu, Zixuan
    Adeli, Ehsan
    Pohl, Kilian M.
    MEDICAL IMAGE ANALYSIS, 2021, 71
  • [42] Credal Self-Supervised Learning
    Lienen, Julian
    Hüllermeier, Eyke
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [43] Self-Supervised Learning for Recommendation
    Huang, Chao
    Xia, Lianghao
    Wang, Xiang
    He, Xiangnan
    Yin, Dawei
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2022, 2022, : 5136 - 5139
  • [44] Quantum self-supervised learning
    Jaderberg, B.
    Anderson, L. W.
    Xie, W.
    Albanie, S.
    Kiffner, M.
    Jaksch, D.
    QUANTUM SCIENCE AND TECHNOLOGY, 2022, 7 (03)
  • [45] Self-Supervised Learning for Electroencephalography
    Rafiei, Mohammad H.
    Gauthier, Lynne V.
    Adeli, Hojjat
    Takabi, Daniel
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (02) : 1457 - 1471
  • [46] SINC: Self-Supervised In-Context Learning for Vision-Language Tasks
    Chen, Yi-Syuan
    Song, Yun-Zhu
    Yeo, Cheng Yu
    Liu, Bei
    Fu, Jianlong
    Shuai, Hong-Han
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 15384 - 15396
  • [47] MSLTE: multiple self-supervised learning tasks for enhancing EEG emotion recognition
    Li, Guangqiang
    Chen, Ning
    Niu, Yixiang
    Xu, Zhangyong
    Dong, Yuxuan
    Jin, Jing
    Zhu, Hongqin
    JOURNAL OF NEURAL ENGINEERING, 2024, 21 (02)
  • [48] A New Self-supervised Method for Supervised Learning
    Yang, Yuhang
    Ding, Zilin
    Cheng, Xuan
    Wang, Xiaomin
    Liu, Ming
    INTERNATIONAL CONFERENCE ON COMPUTER VISION, APPLICATION, AND DESIGN (CVAD 2021), 2021, 12155
  • [49] Weather Recognition Using Self-supervised Deep Learning
    Acuna-Escobar, Diego
    Intriago-Pazmino, Monserrate
    Ibarra-Fiallo, Julio
    SMART TECHNOLOGIES, SYSTEMS AND APPLICATIONS, SMARTTECH-IC 2021, 2022, 1532 : 161 - 174
  • [50] Self-supervised Representation Learning Using 360° Data
    Li, Junnan
    Liu, Jianquan
    Wong, Yongkang
    Nishimura, Shoji
    Kankanhalli, Mohan S.
    PROCEEDINGS OF THE 27TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA (MM'19), 2019, : 998 - 1006