Research Paper Classification and Recommendation System Based on Fine-Tuning BERT

Cited by: 0
Authors
Biswas, Dipto [1 ]
Gil, Joon-Min [2 ]
Affiliations
[1] Daegu Catholic Univ, Grad Sch, Dept Comp Software Engn, Gyongsan, South Korea
[2] Sch Comp Software Engn, Gyongsan, South Korea
Source
2023 IEEE 24TH INTERNATIONAL CONFERENCE ON INFORMATION REUSE AND INTEGRATION FOR DATA SCIENCE, IRI | 2023
Funding
National Research Foundation of Singapore;
Keywords
NLP; CNN; BiLSTM; BERT; Fine-tuning Model;
DOI
10.1109/IRI58017.2023.00058
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
In this paper, we compare the performance of two popular NLP models, a pre-trained and fine-tuned BERT model and a BiLSTM combined with a CNN, on the classification and recommendation tasks for research papers. We evaluate both models on a research-journal benchmark dataset. The results show that the pre-trained, fine-tuned BERT model is superior to the combined CNN-BiLSTM model in classification performance.
Pages
295-296 (2 pages)
References
4 records
[1] Chen, Q.; Xie, Q.; Yuan, Q.; Huang, H.; Li, Y. Research on a Real-Time Monitoring Method for the Wear State of a Tool Based on a Convolutional Bidirectional LSTM Model. Symmetry-Basel, 2019, 11(10).
[2] Devlin, J. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2019), 2019, Vol. 1, p. 4171.
[3] Kingma, D. P. Adam: A Method for Stochastic Optimization. 2014.
[4] Mikolov, T. Proceedings of the Annual Conference on Neural Information Processing Systems, 2013, p. 1.