Novel multi-domain attention for abstractive summarisation

Cited by: 7
Authors
Qu, Chunxia [1 ]
Lu, Ling [1 ]
Wang, Aijuan [1 ]
Yang, Wu [1 ]
Chen, Yinong [2 ]
Affiliations
[1] Chongqing Univ Technol, Coll Comp Sci & Engn, Chongqing 400050, Peoples R China
[2] Arizona State Univ, Sch Comp & Augmented Intelligence, Tempe, AZ USA
Keywords
abstracting; abstractive summarisation; attention mechanism; Bi-LSTM; convolutional neural nets; coverage mechanism; pointer network; recurrent neural nets; text analysis; word processing;
DOI
10.1049/cit2.12117
CLC number
TP18 [Theory of artificial intelligence];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Existing abstractive text summarisation models consider only the word-sequence correlations between the source document and the reference summary; because of this narrow perspective, their generated summaries often fail to cover the main subject of the source document. To address these shortcomings, a multi-domain attention pointer (MDA-Pointer) abstractive summarisation model is proposed in this work. First, the model uses bidirectional long short-term memory (Bi-LSTM) to encode the word and sentence sequences of the source document separately, obtaining semantic representations at both the word and sentence levels. A multi-domain attention mechanism is then established between these representations and the summary words, allowing the model to generate each summary word conditioned on both words and sentences. Next, a pointer network forms the summary by selecting words either from the vocabulary or from the original word sequence, and a coverage mechanism is introduced at both the word and sentence levels to reduce redundancy in the summary content. Finally, experiments on the CNN/Daily Mail dataset show that the ROUGE scores of the model improve both without and with the coverage mechanism, verifying the effectiveness of the proposed model.
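The abstract describes one decoding step that fuses word-level and sentence-level attention and then mixes a vocabulary distribution with a copy distribution via a pointer network. A minimal NumPy sketch of that step, assuming dot-product attention scoring and a toy linear projection for the vocabulary distribution (all function and parameter names here are illustrative, not taken from the paper):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def multi_domain_pointer_step(word_states, sent_states, dec_state,
                              vocab_proj, src_ids, p_gen):
    """One decoding step of a simplified multi-domain attention pointer model.

    word_states: (T_w, d) encoder word-level hidden states
    sent_states: (T_s, d) encoder sentence-level hidden states
    dec_state:   (d,)    current decoder state
    vocab_proj:  (V, 3d) toy projection to the vocabulary (illustrative)
    src_ids:     token ids of the source words (length T_w)
    p_gen:       probability of generating from the vocabulary vs. copying
    """
    # Word-level attention over encoder word states (dot-product scoring).
    a_word = softmax(word_states @ dec_state)          # (T_w,)
    # Sentence-level attention over encoder sentence states.
    a_sent = softmax(sent_states @ dec_state)          # (T_s,)
    # Context vectors from both attention domains.
    c_word = a_word @ word_states                      # (d,)
    c_sent = a_sent @ sent_states                      # (d,)
    # Vocabulary distribution from the fused word/sentence contexts.
    p_vocab = softmax(vocab_proj @ np.concatenate([c_word, c_sent, dec_state]))
    # Pointer mixture: interpolate between generating from the vocabulary
    # and copying source tokens weighted by word-level attention.
    p_final = p_gen * p_vocab
    for i, tok in enumerate(src_ids):
        p_final[tok] += (1.0 - p_gen) * a_word[i]
    return p_final

rng = np.random.default_rng(0)
d, T_w, T_s, V = 8, 6, 3, 20
p = multi_domain_pointer_step(
    rng.normal(size=(T_w, d)), rng.normal(size=(T_s, d)),
    rng.normal(size=d), rng.normal(size=(V, 3 * d)),
    src_ids=[2, 5, 7, 7, 1, 3], p_gen=0.7)
print(round(p.sum(), 6))  # → 1.0, a valid distribution over the vocabulary
```

The coverage mechanism mentioned in the abstract would additionally accumulate `a_word` (and `a_sent`) across decoding steps and penalise re-attending to already-covered positions; it is omitted here for brevity.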
Pages: 796-806 (11 pages)
Related papers
50 records in total
  • [1] Multi-domain gate and interactive dual attention for multi-domain dialogue state tracking
    Jia, Xu
    Zhang, Ruochen
    Peng, Min
    Knowledge-Based Systems, 2024, 286
  • [2] Domain attention model for multi-domain sentiment classification
    Yuan, Zhigang
    Wu, Sixing
    Wu, Fangzhao
    Liu, Junxin
    Huang, Yongfeng
    Knowledge-Based Systems, 2018, 155: 1-10
  • [3] Novel SDN Multi-domain Architecture
    Helebrandt, Pavol
    Kotuliak, Ivan
    12th IEEE International Conference on Emerging eLearning Technologies and Applications (ICETA 2014), 2014: 139-143
  • [4] Multi-domain Attention Fusion Network for Language Recognition
    Ju, M.
    Xu, Y.
    Ke, D.
    Su, K.
    SN Computer Science, 4 (1)
  • [5] Multi-Domain Dialogue State Tracking with Disentangled Domain-Slot Attention
    Yang, Longfei
    Li, Jiyi
    Li, Sheng
    Shinozaki, Takahiro
    Findings of the Association for Computational Linguistics: ACL 2023, 2023: 4928-4938
  • [6] Collaborative attention neural network for multi-domain sentiment classification
    Yue, Chunyi
    Cao, Hanqiang
    Xu, Guoping
    Dong, Youli
    Applied Intelligence, 2021, 51 (6): 3174-3188
  • [7] Multi-Domain Sentiment Classification Based on Domain-Aware Embedding and Attention
    Cai, Yitao
    Wan, Xiaojun
    Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, 2019: 4904-4910
  • [8] Pay Better Attention to Attention: Head Selection in Multilingual and Multi-Domain Sequence Modeling
    Gong, Hongyu
    Tang, Yun
    Pino, Juan Miguel
    Li, Xian
    Advances in Neural Information Processing Systems 34 (NeurIPS 2021), 2021, 34