Text Classification Based on Neural Network Fusion

Cited by: 2
Authors
Kim, Deageon [1]
Affiliations
[1] Dongseo Univ, Architectural Engn, 47 Jurye Ro, Busan 47011, South Korea
Source
TEHNICKI GLASNIK-TECHNICAL JOURNAL | 2023, Vol. 17, No. 03
Funding
National Research Foundation of Singapore;
Keywords
attention mechanism; deep learning; neural network; presentation; text classification;
DOI
10.31803/tg-20221228154330
CLC Number
T [Industrial Technology];
Subject Classification Code
08;
Abstract
The goal of text classification is to identify the category to which a text belongs. Text categorization is widely used in email detection, sentiment analysis, topic labeling and other fields. Good text representation is the key to improving the performance of NLP tasks. Traditional text representation adopts the bag-of-words model or the vector space model, which loses the context information of the text and suffers from high dimensionality and high sparsity. In recent years, with the growth of available data and the improvement of computing performance, the use of deep learning techniques to represent and classify text has attracted great attention. Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN) and RNNs with attention mechanisms are used to represent text and then to perform classification and other NLP tasks, all with better performance than traditional methods. In this paper, we design two sentence-level models based on deep networks, as follows. (1) A text representation and classification model based on a bidirectional RNN and a CNN (BRCNN). BRCNN's input is the word vector corresponding to each word in the sentence. After the RNN extracts word-order information from the sentence, a CNN extracts higher-level sentence features; after convolution, a max-pooling operation produces the sentence vector, and finally a softmax classifier performs the classification. The RNN captures the word-order information in the sentence, while the CNN extracts useful features. Experiments on eight text classification tasks show that the BRCNN model obtains better text feature representations, with classification accuracy equal to or higher than the prior art. (2) An attention-mechanism-and-CNN (ACNN) model uses an RNN with an attention mechanism to obtain context vectors; a CNN then extracts higher-level features, a max-pooling operation produces the sentence vector, and a softmax classifier classifies the text. Experiments on eight text classification benchmark data sets show that ACNN improves the stability of model convergence and converges to an optimal or locally optimal solution better than BRCNN.
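The BRCNN pipeline described in the abstract (word vectors → bidirectional RNN → convolution → max pooling over time → softmax) can be sketched as a minimal NumPy forward pass. All dimensions, the tanh RNN cell, the ReLU convolution, and the random weights below are illustrative assumptions; the paper's actual architecture details and hyperparameters are not given here.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def rnn_pass(X, Wx, Wh, reverse=False):
    """Simple tanh RNN over the word vectors; returns one hidden state per word."""
    T = X[::-1] if reverse else X
    h = np.zeros(Wh.shape[0])
    states = []
    for x in T:
        h = np.tanh(Wx @ x + Wh @ h)
        states.append(h)
    if reverse:
        states = states[::-1]
    return np.stack(states)                                  # (seq_len, hidden)

# Hypothetical sizes: 6 words, 8-dim embeddings, 5-dim hidden state, 4 classes.
seq_len, emb, hid, n_classes = 6, 8, 5, 4
X = rng.normal(size=(seq_len, emb))                          # word vectors of one sentence

# 1) Bidirectional RNN extracts word-order information in both directions.
Wx_f, Wh_f = rng.normal(size=(hid, emb)), rng.normal(size=(hid, hid))
Wx_b, Wh_b = rng.normal(size=(hid, emb)), rng.normal(size=(hid, hid))
H = np.concatenate([rnn_pass(X, Wx_f, Wh_f),
                    rnn_pass(X, Wx_b, Wh_b, reverse=True)], axis=1)   # (6, 10)

# 2) 1-D convolution over a window of 3 time steps extracts higher-level features.
win, n_filters = 3, 7
F = rng.normal(size=(n_filters, win * H.shape[1]))
conv = np.stack([np.maximum(F @ H[t:t + win].ravel(), 0.0)   # ReLU feature maps
                 for t in range(seq_len - win + 1)])          # (4, 7)

# 3) Max pooling over time yields a fixed-size sentence vector.
sent_vec = conv.max(axis=0)                                   # (7,)

# 4) A softmax classifier maps the sentence vector to a class distribution.
W_out = rng.normal(size=(n_classes, n_filters))
probs = softmax(W_out @ sent_vec)
print(probs.shape)
```

The ACNN variant differs only in step 1: attention weights over the RNN hidden states produce context vectors before the convolution, instead of feeding the raw bidirectional states to the CNN.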
Pages: 359-366 (8 pages)