A Novel Model Combining Transformer and Bi-LSTM for News Categorization

Cited by: 1
Authors
Liu, Yuanzhi [1 ]
He, Min [1 ]
Shi, Mengjia [1 ]
Jeon, Seunggil [2 ]
Affiliations
[1] Yunnan Univ, Sch Informat Sci & Engn, Kunming 650091, Peoples R China
[2] Samsung Elect, Suwon 16677, Gyeonggi Do, South Korea
Keywords
Attention mechanism; bidirectional long short-term memory (Bi-LSTM); natural language processing (NLP); news categorization (NC); transformer;
DOI
10.1109/TCSS.2022.3223621
CLC number
TP3 [Computing Technology, Computer Technology];
Discipline classification code
0812;
Abstract
News categorization (NC), which aims to identify distinct categories of news by analyzing their contents, has made substantial progress since deep learning was introduced into the natural language processing (NLP) field. Although the transformer is a state-of-the-art model, its classification performance is unsatisfactory compared with that of recurrent neural networks (RNNs) and convolutional neural networks (CNNs) when it is not pretrained. Based on the transformer model, this article proposes a novel framework that combines a bidirectional long short-term memory (Bi-LSTM) network and the transformer to address this problem. In the proposed framework, the self-attention mechanism is replaced with a Bi-LSTM to capture semantic information from sentences. Meanwhile, an attention mechanism is applied to focus on important words and adjust their weights, mitigating the loss of long-distance information. With a pooling network, the network complexity is reduced and the main features are highlighted by halving the dimension of the hidden state. Finally, after obtaining the hidden representation from the above structures, we use a contraction network to further capture long-range associations within a text. Experiments on three large-scale corpora were performed to evaluate the proposed framework, and the results demonstrate that our model outperforms other models such as the deep pyramid CNN (DPCNN) and the transformer.
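To make the described architecture concrete, below is a minimal PyTorch sketch of the kind of pipeline the abstract outlines: a transformer-style encoder block whose self-attention sublayer is replaced by a Bi-LSTM, an additive attention layer that re-weights salient tokens, a pooling step that halves the hidden-state dimension, and a small "contraction" network before classification. All class names, dimensions, and layer choices are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class BiLSTMTransformerBlock(nn.Module):
    """Sketch of an encoder block in which a Bi-LSTM stands in for
    multi-head self-attention, followed by additive attention over tokens
    and a pooling projection that halves the hidden dimension."""

    def __init__(self, d_model=256, ffn_dim=512, dropout=0.1):
        super().__init__()
        # Each direction has d_model // 2 units so the concatenated
        # Bi-LSTM output matches the residual path (d_model).
        self.bilstm = nn.LSTM(d_model, d_model // 2, batch_first=True,
                              bidirectional=True)
        self.norm1 = nn.LayerNorm(d_model)
        # Position-wise feed-forward sublayer, as in a standard transformer.
        self.ffn = nn.Sequential(nn.Linear(d_model, ffn_dim), nn.ReLU(),
                                 nn.Linear(ffn_dim, d_model))
        self.norm2 = nn.LayerNorm(d_model)
        self.dropout = nn.Dropout(dropout)
        # Additive attention scores over token positions.
        self.attn_score = nn.Linear(d_model, 1)
        # Pooling/projection that halves the hidden-state dimension.
        self.pool = nn.Linear(d_model, d_model // 2)

    def forward(self, x):                        # x: (batch, seq_len, d_model)
        h, _ = self.bilstm(x)                    # contextual token states
        h = self.norm1(x + self.dropout(h))      # residual + layer norm
        h = self.norm2(h + self.dropout(self.ffn(h)))
        alpha = torch.softmax(self.attn_score(h), dim=1)  # weight important words
        h = alpha * h
        return torch.relu(self.pool(h))          # (batch, seq_len, d_model // 2)


class NewsClassifier(nn.Module):
    """End-to-end sketch: embedding -> Bi-LSTM/transformer block ->
    mean-pooled contraction MLP -> category logits."""

    def __init__(self, vocab_size=30000, d_model=256, num_classes=10):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.block = BiLSTMTransformerBlock(d_model)
        # Stand-in for the contraction network: a small MLP that further
        # compresses the pooled representation before classification.
        self.contract = nn.Sequential(nn.Linear(d_model // 2, d_model // 4),
                                      nn.ReLU(),
                                      nn.Linear(d_model // 4, num_classes))

    def forward(self, token_ids):                # token_ids: (batch, seq_len)
        h = self.block(self.embed(token_ids))
        return self.contract(h.mean(dim=1))      # average over tokens


if __name__ == "__main__":
    model = NewsClassifier()
    logits = model(torch.randint(0, 30000, (4, 64)))  # 4 documents, 64 tokens each
    print(logits.shape)                                # torch.Size([4, 10])
```

In this sketch, the Bi-LSTM's two directions are each sized at d_model // 2 so that their concatenation matches the residual path, mirroring how the abstract describes substituting the self-attention sublayer rather than adding a separate recurrent branch.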
Pages: 4862-4869
Number of pages: 8