Improving Transformer with Sequential Context Representations for Abstractive Text Summarization

Cited: 21
Authors
Cai, Tian [1 ,2 ]
Shen, Mengjun [1 ,2 ]
Peng, Huailiang [1 ,2 ]
Jiang, Lei [1 ]
Dai, Qiong [1 ]
Affiliations
[1] Chinese Acad Sci, Inst Informat Engn, Beijing, Peoples R China
[2] Univ Chinese Acad Sci, Sch Cyber Secur, Beijing, Peoples R China
Source
NATURAL LANGUAGE PROCESSING AND CHINESE COMPUTING (NLPCC 2019), PT I | 2019, Vol. 11838
Funding
National Science Foundation (US);
Keywords
Transformer; Abstractive summarization;
DOI
10.1007/978-3-030-32233-5_40
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
Recent dominant approaches to abstractive text summarization mainly follow the RNN-based encoder-decoder framework; these methods usually suffer from poor semantic representations for long sequences. In this paper, we propose a new abstractive summarization model, called RC-Transformer (RCT). The model is not only capable of learning long-term dependencies, but also addresses the Transformer's inherent insensitivity to word-order information. We extend the Transformer with an additional RNN-based encoder to capture sequential context representations. To extract salient information effectively, we further construct a convolution module that filters the sequential context by local importance. Experimental results on the Gigaword and DUC-2004 datasets show that our proposed model achieves state-of-the-art performance, even without introducing external information. In addition, our model is also faster than RNN-based models.
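The abstract describes the architecture only at a high level. Below is a minimal PyTorch sketch of how a Transformer encoder could be paired with an RNN-based sequential-context encoder and a convolutional local filter, as the abstract outlines. All concrete choices here (a bidirectional GRU rather than another RNN, the hidden sizes, the kernel width of 3, concatenation-plus-projection fusion, and the omission of positional encodings) are illustrative assumptions, not the authors' published configuration.

# Minimal sketch of an RC-Transformer-style encoder, assuming PyTorch.
# Layer sizes, GRU vs. LSTM, kernel width, and the fusion scheme are
# NOT given in the abstract; everything below is an assumption.
import torch
import torch.nn as nn


class RCTransformerEncoder(nn.Module):
    def __init__(self, vocab_size, d_model=512, nhead=8, num_layers=6):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # Standard Transformer encoder stack (long-range dependencies).
        # Positional encodings are omitted here for brevity; in this sketch
        # the RNN branch supplies the word-order information.
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers)
        # Additional RNN-based encoder capturing sequential context,
        # to which pure self-attention is insensitive.
        self.rnn = nn.GRU(d_model, d_model // 2, batch_first=True,
                          bidirectional=True)
        # Convolution over the RNN outputs, filtering the sequential
        # context by local importance (kernel width 3 is an assumption).
        self.conv = nn.Conv1d(d_model, d_model, kernel_size=3, padding=1)
        # Fuse the two branches; concat + linear projection is an
        # assumption, the paper may combine them differently.
        self.fuse = nn.Linear(2 * d_model, d_model)

    def forward(self, tokens):                      # tokens: (batch, seq)
        x = self.embed(tokens)                      # (batch, seq, d_model)
        global_ctx = self.transformer(x)            # attention branch
        seq_ctx, _ = self.rnn(x)                    # sequential branch
        # Conv1d expects (batch, channels, seq), so transpose around it.
        local = self.conv(seq_ctx.transpose(1, 2)).transpose(1, 2)
        return self.fuse(torch.cat([global_ctx, local], dim=-1))


# Usage: encode a batch of 2 sequences of 16 token ids.
enc = RCTransformerEncoder(vocab_size=10000)
out = enc(torch.randint(0, 10000, (2, 16)))
print(out.shape)  # torch.Size([2, 16, 512])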
Pages: 512-524
Number of pages: 13