KRA: K-Nearest Neighbor Retrieval Augmented Model for Text Classification

Cited: 0
Authors
Li, Jie [1 ]
Tang, Chang [1 ]
Lei, Zhechao [2 ]
Zhang, Yirui [1 ]
Li, Xuan [1 ]
Yu, Yanhua [1 ]
Pi, Renjie [1 ]
Hu, Linmei [3 ]
Affiliations
[1] Beijing Univ Posts & Telecommun, Natl Pilot Software Engn Sch, Sch Comp Sci, Beijing 100876, Peoples R China
[2] Beijing Normal Univ, Sch Int Chinese Language Educ, Beijing 100875, Peoples R China
[3] Beijing Inst Technol, Sch Comp Sci Technol, Beijing 100081, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
k-nearest neighbors; text augmentation; text classification;
DOI
10.3390/electronics13163237
Chinese Library Classification (CLC) Number
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Text classification is a fundamental task in natural language processing (NLP). Deep-learning-based text classification methods typically have two stages, training and inference, yet the training dataset is used only in the training stage. To make full use of the training dataset during inference and thereby improve model performance, we propose a k-nearest-neighbors retrieval-augmented method (KRA) for deep-learning-based text classification models. KRA first builds a datastore that holds the embeddings of the training samples during the training stage. During inference, the model retrieves the top k nearest neighbors of the test text from the datastore. The retrieved neighbors are then expanded with text augmentation methods, including traditional augmentation techniques and a large language model (LLM)-based method. Finally, the method weights the augmented neighbors by their distance from the target text and incorporates their labels into the final prediction accordingly. We evaluate KRA on six benchmark datasets using four commonly used deep learning models: CNN, LSTM, BERT, and RoBERTa. The results demonstrate that KRA significantly improves the classification performance of these models, with an average accuracy improvement of 0.3% for BERT and up to 0.4% for RoBERTa. These improvements highlight the effectiveness and generalizability of KRA across different models and datasets, making it a valuable enhancement for a wide range of text classification tasks.
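The abstract describes a pipeline of cache training-sample embeddings at training time, retrieve the k nearest neighbors of a test text at inference time, weight them by distance, and fold their labels into the final prediction. The sketch below illustrates that pipeline under stated assumptions: the class name, the `encode`/`predict_proba` callables, and the interpolation weight `lam` are illustrative, not the paper's actual implementation, and the LLM-based neighbor-augmentation step is omitted.

```python
import numpy as np

# Minimal sketch of kNN retrieval-augmented inference.
# Assumes a trained encoder `encode(text) -> np.ndarray` of fixed dimension
# and a base classifier `predict_proba(text) -> np.ndarray` over
# `num_classes` classes. All names here are hypothetical.

class KNNAugmentedClassifier:
    def __init__(self, encode, predict_proba, num_classes,
                 k=8, lam=0.3, temperature=1.0):
        self.encode = encode
        self.predict_proba = predict_proba
        self.num_classes = num_classes
        self.k = k                    # number of neighbors to retrieve
        self.lam = lam                # weight of the kNN label distribution
        self.temperature = temperature
        self.keys = None              # (N, d) training-sample embeddings
        self.labels = None            # (N,) training-sample labels

    def build_store(self, train_texts, train_labels):
        """Training stage: store an embedding for every training sample."""
        self.keys = np.stack([self.encode(t) for t in train_texts])
        self.labels = np.asarray(train_labels)

    def knn_distribution(self, query):
        """Distance-weighted label distribution over the k nearest neighbors."""
        dists = np.linalg.norm(self.keys - query, axis=1)
        idx = np.argsort(dists)[: self.k]
        # Closer neighbors receive exponentially larger weights.
        weights = np.exp(-dists[idx] / self.temperature)
        probs = np.zeros(self.num_classes)
        for i, w in zip(idx, weights):
            probs[self.labels[i]] += w
        return probs / probs.sum()

    def predict(self, text):
        """Inference stage: interpolate model and neighbor distributions."""
        model_probs = self.predict_proba(text)
        knn_probs = self.knn_distribution(self.encode(text))
        mixed = (1.0 - self.lam) * model_probs + self.lam * knn_probs
        return int(np.argmax(mixed))
```

In this formulation, `lam = 0` recovers the base model and `lam = 1` reduces to a pure distance-weighted kNN vote over stored training labels; how neighbor labels are weighted and combined in the actual paper may differ per model and dataset.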
Pages: 16