Contrastive Open-Set Active Learning-Based Sample Selection for Image Classification

Cited by: 2
Authors
Yan, Zizheng [1 ,2 ]
Ruan, Delian [3 ]
Wu, Yushuang [1 ,2 ]
Huang, Junshi [3 ]
Chai, Zhenhua [3 ]
Han, Xiaoguang [1 ,2 ]
Cui, Shuguang [1 ,2 ]
Li, Guanbin [4 ]
Affiliations
[1] Chinese Univ Hong Kong Shenzhen, Shenzhen Future Network Intelligence Inst, Sch Sci & Engn, Shenzhen 518172, Peoples R China
[2] Chinese Univ Hong Kong Shenzhen, Guangdong Prov Key Lab Future Networks Intelligence, Shenzhen 518172, Peoples R China
[3] Meituan, Beijing 100102, Peoples R China
[4] Sun Yat Sen Univ, Res Inst Sun Yat Sen Univ Shenzhen, Sch Comp Sci & Engn, Guangzhou 510008, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Detectors; Training; Uncertainty; Labeling; Representation learning; Clustering methods; Standards; Image recognition; active learning; contrastive learning;
DOI
10.1109/TIP.2024.3451928
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this paper, we address a complex but practical scenario in Active Learning (AL) known as open-set AL, where the unlabeled data consists of both in-distribution (ID) and out-of-distribution (OOD) samples. Standard AL methods fail in this scenario because OOD samples are highly likely to be regarded as uncertain samples, so they are selected and the labeling budget is wasted. Existing methods focus on selecting samples that are highly likely to be ID, which tend to be easy and less informative. To address this, we introduce two criteria, namely contrastive confidence and historical divergence, which measure the likelihood that a sample is ID and the hardness of a sample, respectively. By balancing the two proposed criteria, as many highly informative ID samples as possible can be selected. Furthermore, unlike previous methods that require additional neural networks to detect the OOD samples, we propose a contrastive clustering framework that endows the classifier with the ability to identify OOD samples and further enhances the network's representation learning. Experimental results demonstrate that the proposed method achieves state-of-the-art performance on several benchmark datasets.
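To illustrate the selection idea described above (balancing an ID-likelihood score against a hardness score), the following is a minimal sketch and not the paper's actual implementation: the function name select_queries, the min-max normalization, and the trade-off weight alpha are assumptions introduced here purely for illustration.

import numpy as np

def select_queries(contrastive_confidence, historical_divergence, budget, alpha=0.5):
    # Hypothetical sketch: rank unlabeled samples by a weighted combination of
    # (i) contrastive confidence, higher for samples more likely to be ID, and
    # (ii) historical divergence, higher for harder (more informative) samples.
    conf = np.asarray(contrastive_confidence, dtype=float)
    div = np.asarray(historical_divergence, dtype=float)
    # Min-max normalize each criterion so the weighted sum is on a common scale.
    conf = (conf - conf.min()) / (conf.max() - conf.min() + 1e-8)
    div = (div - div.min()) / (div.max() - div.min() + 1e-8)
    # alpha (assumed here) trades off ID likelihood against informativeness.
    score = alpha * conf + (1.0 - alpha) * div
    # Return the indices of the top-scoring samples within the labeling budget.
    return np.argsort(-score)[:budget]

For example, select_queries(conf_scores, div_scores, budget=100) would return the indices of 100 candidate samples to send for annotation under this hypothetical scoring rule.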
Pages: 5525-5537
Page count: 13