Multi-MCCR: Multiple models regularization for semi-supervised text classification with few labels

Cited by: 6
Authors
Zhou, Nai [1 ]
Yao, Nianmin [1 ,3 ]
Li, Qibin [1 ]
Zhao, Jian [2 ,3 ]
Zhang, Yanan [4 ]
Affiliations
[1] Dalian Univ Technol, Sch Comp Sci, Dalian 116024, Peoples R China
[2] Dalian Univ Technol, Sch Automot Engn, Dalian 116024, Peoples R China
[3] Dalian Univ Technol, Ningbo Inst, Ningbo 315016, Peoples R China
[4] Automot Data China Co Ltd, Tianjin 300300, Peoples R China
Funding
National Key R&D Program of China
Keywords
Semi-supervised learning; Multiple models contrast learning; Consistent regularization; Text classification;
DOI
10.1016/j.knosys.2023.110588
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Semi-supervised learning has achieved impressive results and is commonly applied in text classification. However, when labeled texts are exceedingly limited, neural networks are prone to over-fitting due to the non-negligible inconsistency between model training and inference caused by dropout, which randomly masks some neurons. To alleviate this inconsistency, we propose a simple Multiple Models Contrast learning method based on Consistent Regularization, named Multi-MCCR, which consists of multiple models with the same structure and a C-BiKL loss strategy. Specifically, a sample first passes through multiple identical models to obtain multiple different output distributions, which enriches the sample's output distributions and provides the conditions for the subsequent consistency approximation. The C-BiKL loss strategy then minimizes the combination of the bidirectional Kullback-Leibler (BiKL) divergence between these output distributions and the Cross-Entropy loss on labeled data, which provides consistency constraints (BiKL) for the model while ensuring correct classification (Cross-Entropy). This multi-model contrast learning alleviates the inconsistency between model training and inference caused by the randomness of dropout, thereby avoiding over-fitting and improving classification ability in scenarios with limited labeled samples. We conducted experiments on six widely used text classification datasets, covering sentiment analysis, topic categorization, and review classification; the results show that our method is universally effective for semi-supervised text classification with limited labeled texts. (c) 2023 Elsevier B.V. All rights reserved.
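The loss described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the pairwise averaging over model outputs and the weighting coefficient `lam` are assumptions, since the abstract does not specify how the BiKL and Cross-Entropy terms are combined.

```python
import numpy as np

def softmax(logits):
    # numerically stable softmax over the class dimension
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl(p, q, eps=1e-12):
    # KL(p || q) per sample, with epsilon for numerical safety
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

def bikl(logits_a, logits_b):
    # bidirectional KL divergence: KL(a||b) + KL(b||a), averaged over the batch
    pa, pb = softmax(logits_a), softmax(logits_b)
    return float(np.mean(kl(pa, pb) + kl(pb, pa)))

def cross_entropy(logits, labels, eps=1e-12):
    # standard cross-entropy on labeled data
    p = softmax(logits)
    return float(np.mean(-np.log(p[np.arange(len(labels)), labels] + eps)))

def c_bikl_loss(logits_list, labeled_logits, labels, lam=1.0):
    # C-BiKL sketch: cross-entropy on labeled data plus the average
    # pairwise BiKL consistency term across the K model outputs
    # (lam is a hypothetical trade-off weight, not from the paper)
    cons, pairs = 0.0, 0
    for i in range(len(logits_list)):
        for j in range(i + 1, len(logits_list)):
            cons += bikl(logits_list[i], logits_list[j])
            pairs += 1
    cons /= max(pairs, 1)
    return cross_entropy(labeled_logits, labels) + lam * cons
```

When the multiple models produce identical output distributions, the BiKL term vanishes and only the supervised Cross-Entropy remains; divergent outputs (e.g. from different dropout masks) are penalized, which is the consistency constraint the abstract describes.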
Pages: 11