Abstractive Summarization Model with a Feature-Enhanced Seq2Seq Structure

Cited by: 0
Authors
Hao, Zepeng [1 ]
Ji, Jingzhou [1 ]
Xie, Tao [1 ]
Xue, Bin [1 ]
Affiliations
[1] Natl Univ Def Technol, Sch Informat & Commun, Xian, Peoples R China
Source
2020 5TH ASIA-PACIFIC CONFERENCE ON INTELLIGENT ROBOT SYSTEMS (ACIRS 2020) | 2020
Keywords
abstractive summarization; feature-enhanced Seq2Seq structure; memory network; non-local network;
DOI
10.1109/acirs49895.2020.9162627
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812 ;
Abstract
Abstractive text summarization uses deep learning methods to condense one or more documents into a concise summary that expresses the main meaning of the source. Most methods are based on the traditional Seq2Seq structure, but that structure has limited ability to capture and store long-term and global features, so the generated summary can lack information. In this paper, we propose a new abstractive summarization model based on a feature-enhanced Seq2Seq structure for the single-document summarization task. The model uses two types of feature-capture networks to improve the encoder and decoder of the traditional Seq2Seq structure, enhancing its ability to capture and store long-term and global features so that the generated summary is more informative and fluent. Finally, we evaluate the proposed model on the CNN/DailyMail dataset. Experimental results demonstrate that it is more effective than the baseline model, improving the three metrics R-1, R-2, and R-L by 5.6%, 5.3%, and 6.2%, respectively.
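The keywords name a non-local network as one of the feature-capture components used to inject global features. As a rough illustration only (not the authors' implementation; all function names, shapes, and the NumPy setting are assumptions), a non-local block lets every encoder position attend to all other positions via a residual self-attention update:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def non_local_block(h, w_q, w_k, w_v):
    """Non-local (self-attention) enhancement of a sequence of hidden states.

    h: (T, d) encoder hidden states. Each output position aggregates
    information from ALL positions, capturing global features that a
    step-by-step recurrent encoder propagates only locally.
    Weights w_q, w_k, w_v are illustrative (d, d) projections.
    """
    q, k, v = h @ w_q, h @ w_k, h @ w_v       # project states to queries/keys/values
    scores = q @ k.T / np.sqrt(k.shape[-1])   # pairwise affinities, shape (T, T)
    attn = softmax(scores, axis=-1)           # normalize over source positions
    return h + attn @ v                       # residual connection, as in non-local nets

# Toy usage with random states and weights.
rng = np.random.default_rng(0)
T, d = 6, 8
h = rng.standard_normal((T, d))
w_q, w_k, w_v = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
out = non_local_block(h, w_q, w_k, w_v)
print(out.shape)  # (6, 8): same shape as the input states
```

The residual form means the block can only add globally pooled context on top of the original encoder features, which matches the "feature-enhanced" framing rather than replacing the Seq2Seq encoder.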
Pages: 163-167
Page count: 5