CLUR: Uncertainty Estimation for Few-Shot Text Classification with Contrastive Learning

Cited: 3
Authors
He, Jianfeng [1 ]
Zhang, Xuchao [2 ]
Lei, Shuo [1 ]
Alhamadani, Abdulaziz [1 ]
Chen, Fanglan [1 ]
Xiao, Bei [3 ]
Lu, Chang-Tien [1 ]
Affiliations
[1] Virginia Tech, Falls Church, VA 22043 USA
[2] Microsoft, Redmond, WA USA
[3] Amer Univ, Washington, DC USA
Source
PROCEEDINGS OF THE 29TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2023 | 2023
Funding
National Science Foundation (USA);
Keywords
Uncertainty estimation; few-shot; pseudo labels; contrastive learning;
DOI
10.1145/3580305.3599276
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Few-shot text classification has extensive applications in settings where sample collection is expensive or complicated. When the penalty for classification errors is high, such as early threat event detection with scarce data, we expect to know "whether we should trust the classification results or reexamine them." This paper investigates Uncertainty Estimation for Few-shot Text Classification (UEFTC), an unexplored research area. Given limited samples, a UEFTC model predicts an uncertainty score for a classification result, which is the likelihood that the classification result is false. However, many traditional uncertainty estimation models in text classification are unsuitable for implementing a UEFTC model: they require numerous training samples, whereas the few-shot setting in UEFTC provides only a few, or just one, support sample per class in an episode. We propose Contrastive Learning from Uncertainty Relations (CLUR) to address UEFTC. CLUR can be trained with only one support sample per class with the help of pseudo uncertainty scores. Unlike previous works that manually set the pseudo uncertainty scores, CLUR learns them self-adaptively using our proposed uncertainty relations. Specifically, we explore four model structures in CLUR to investigate the performance of three commonly used contrastive learning components in UEFTC and find that two of the components are effective. Experimental results show that CLUR outperforms six baselines on four datasets, including a 4.52% AUPR improvement on the RCV1 dataset in a 5-way 1-shot setting. Our code and data split for UEFTC are available at https://github.com/he159ok/CLUR_UncertaintyEst_FewShot_TextCls.
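To make the UEFTC task concrete, the following is a minimal, hypothetical Python sketch of a single 5-way 1-shot episode: a nearest-prototype classifier returns a predicted label together with an uncertainty score, here approximated as one minus the top softmax probability. This illustrates only the task interface described in the abstract, not the CLUR method itself; the stand-in encoder, the embedding size, and the distance-based logits are illustrative assumptions (see the authors' repository for the actual model).

import numpy as np

def encode(texts, embed_dim=64, seed=0):
    # Stand-in text encoder (hypothetical): random embeddings, one per text.
    rng = np.random.default_rng(seed)
    return rng.normal(size=(len(texts), embed_dim))

def classify_with_uncertainty(support_emb, query_emb):
    # Nearest-prototype classification over an n-way, 1-shot episode.
    # support_emb: (n_way, embed_dim), one support sample per class.
    # query_emb:   (embed_dim,), a single query sample.
    # Negative Euclidean distances serve as logits over the n_way classes.
    logits = -np.linalg.norm(support_emb - query_emb, axis=1)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    pred = int(np.argmax(probs))
    # Uncertainty score = likelihood the prediction is false; approximated
    # here as 1 - max softmax probability (a common baseline, not CLUR).
    return pred, 1.0 - float(probs[pred])

support = encode(["class-0 text", "class-1 text", "class-2 text",
                  "class-3 text", "class-4 text"])  # 5-way, 1-shot support set
query = encode(["a new text to classify"], seed=1)[0]
label, uncertainty = classify_with_uncertainty(support, query)
print(label, round(uncertainty, 3))

Per the abstract, CLUR replaces this fixed 1-minus-max-probability heuristic with pseudo uncertainty scores learned self-adaptively from the proposed uncertainty relations via contrastive learning.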
Pages: 698-710
Number of pages: 13