Enhancing abstractive summarization of implicit datasets with contrastive attention

Cited by: 1
Authors
Kwon S. [1 ]
Lee Y. [2 ]
Affiliations
[1] Department of Data Science, Seoul National University of Science and Technology, 232, Gongneung-ro, Nowon-gu, Seoul
[2] Department of Industrial Engineering, Seoul National University of Science and Technology, 232, Gongneung-ro, Nowon-gu, Seoul
Funding
National Research Foundation, Singapore;
Keywords
Abstractive summarization; Contrastive attention; Implicit dataset; Text summarization;
DOI
10.1007/s00521-024-09864-y
Abstract
It is important for abstractive summarization models to identify the important parts of the original document and generate a natural summary accordingly. Recent studies have incorporated the important parts of the original document during training and have shown good performance. However, these approaches are effective for explicit datasets but not for implicit datasets, which are relatively more abstractive. This study addresses the challenge of summarizing implicit datasets, in which the significance of important sentences deviates less across sentences than in explicit datasets. A multi-task learning approach is proposed that reflects information about salient and incidental parts of the document during training; this is achieved by adding a contrastive objective to the fine-tuning process of an encoder-decoder language model. The salient and incidental parts are selected based on their ROUGE-L F1 scores, and their relationships are learned through a triplet loss. The proposed method was evaluated on five benchmark summarization datasets, two explicit and three implicit. The experimental results show a greater improvement on the implicit datasets, particularly the highly abstractive XSum dataset, compared with vanilla fine-tuning of both the BART-base and T5-small models. © The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature 2024.
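The abstract describes a multi-task objective: the usual cross-entropy summarization loss plus a triplet loss that contrasts a salient sentence (high ROUGE-L F1 against the reference summary) with an incidental one (low ROUGE-L F1). The following is a minimal sketch of how such an objective could be wired up, not the paper's exact method: BART-base and the ROUGE-L selection criterion come from the abstract, while the sentence splitting, mean pooling, margin of 1.0, and loss weight `lam = 0.1` are illustrative assumptions.

```python
# Sketch of a contrastive (triplet) objective added to encoder-decoder fine-tuning.
# Assumptions: mean pooling, margin, and loss weight are illustrative, not the paper's settings.
import torch
import torch.nn.functional as F
from transformers import BartForConditionalGeneration, BartTokenizer


def rouge_l_f1(candidate: str, reference: str) -> float:
    """ROUGE-L F1 via longest common subsequence over whitespace tokens."""
    c, r = candidate.split(), reference.split()
    if not c or not r:
        return 0.0
    dp = [[0] * (len(r) + 1) for _ in range(len(c) + 1)]
    for i in range(1, len(c) + 1):
        for j in range(1, len(r) + 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if c[i - 1] == r[j - 1] else max(dp[i - 1][j], dp[i][j - 1])
    lcs = dp[-1][-1]
    prec, rec = lcs / len(c), lcs / len(r)
    return 0.0 if prec + rec == 0 else 2 * prec * rec / (prec + rec)


def pick_salient_incidental(sentences, reference_summary):
    """Rank source sentences by ROUGE-L F1 against the reference summary."""
    ranked = sorted(sentences, key=lambda s: rouge_l_f1(s, reference_summary))
    return ranked[-1], ranked[0]  # most vs. least overlapping sentence


def mean_pool(hidden, mask):
    """Mask-aware mean pooling of token states into one vector per sequence."""
    mask = mask.unsqueeze(-1).float()
    return (hidden * mask).sum(1) / mask.sum(1).clamp(min=1e-6)


tok = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

document = "First sentence of the article. A tangential aside. The key finding is reported here."
summary = "The key finding is reported."
salient, incidental = pick_salient_incidental(document.split(". "), summary)

enc = tok(document, return_tensors="pt", truncation=True)
dec = tok(summary, return_tensors="pt", truncation=True)

# Task 1: ordinary abstractive-summarization fine-tuning loss.
out = model(input_ids=enc.input_ids, attention_mask=enc.attention_mask, labels=dec.input_ids)
ce_loss = out.loss

# Task 2: triplet objective over encoder representations
# (document pulled toward the salient sentence, pushed from the incidental one).
encoder = model.get_encoder()
doc_vec = mean_pool(out.encoder_last_hidden_state, enc.attention_mask)
pos = tok(salient, return_tensors="pt", truncation=True)
neg = tok(incidental, return_tensors="pt", truncation=True)
pos_vec = mean_pool(encoder(**pos).last_hidden_state, pos.attention_mask)
neg_vec = mean_pool(encoder(**neg).last_hidden_state, neg.attention_mask)
triplet = F.triplet_margin_loss(doc_vec, pos_vec, neg_vec, margin=1.0)

lam = 0.1  # assumed weighting between the two objectives
loss = ce_loss + lam * triplet
loss.backward()  # an optimizer step on the combined multi-task loss would follow
```

In this sketch the same encoder produces the document, salient, and incidental representations, so the triplet term directly shapes the encoder states that the decoder attends to during summary generation.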
Pages: 15337-15351
Number of pages: 14
Related Papers
50 total (items [11]-[20] shown)
  • [11] Abstractive Summarization by Neural Attention Model with Document Content Memory
    Choi, Yunseok
    Kim, Dahae
    Lee, Jee-Hyong
    PROCEEDINGS OF THE 2018 CONFERENCE ON RESEARCH IN ADAPTIVE AND CONVERGENT SYSTEMS (RACS 2018), 2018, : 11 - 16
  • [12] Abstractive Document Summarization via Neural Model with Joint Attention
    Hou, Liwei
    Hu, Po
    Bei, Chao
    NATURAL LANGUAGE PROCESSING AND CHINESE COMPUTING, NLPCC 2017, 2018, 10619 : 329 - 338
  • [13] Enhancing Textual Representation for Abstractive Summarization: Leveraging Masked Decoder
    Jia, Ruipeng
    Cao, Yannan
    Fang, Fang
    Li, Jinpeng
    Liu, Yanbing
    Yin, Pengfei
    2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2020
  • [14] Enhancing abstractive summarization of scientific papers using structure information
    Bao, Tong
    Zhang, Heng
    Zhang, Chengzhi
    EXPERT SYSTEMS WITH APPLICATIONS, 2025, 261
  • [15] Neural Attention Model for Abstractive Text Summarization Using Linguistic Feature Space
    Dilawari, Aniqa
    Khan, Muhammad Usman Ghani
    Saleem, Summra
    Zahoor-Ur-Rehman
    Shaikh, Fatema Sabeen
    IEEE ACCESS, 2023, 11 : 23557 - 23564
  • [16] An Abstractive Summarization Model Based on Joint-Attention Mechanism and a Priori Knowledge
    Li, Yuanyuan
    Huang, Yuan
    Huang, Weijian
    Yu, Junhao
    Huang, Zheng
    APPLIED SCIENCES-BASEL, 2023, 13 (07)
  • [17] Cl2sum: abstractive summarization via contrastive prompt constructed by LLMs hallucination
    Huang, Xiang
    Nong, Qiong
    Wang, Xiaobo
    Zhang, Hongcheng
    Du, Kunpeng
    Yin, Chunlin
    Yang, Li
    Yan, Bin
    Zhang, Xuan
    COMPLEX & INTELLIGENT SYSTEMS, 2025, 11 (03)
  • [18] Summary-aware attention for social media short text abstractive summarization
    Wang, Qianlong
    Ren, Jiangtao
    NEUROCOMPUTING, 2021, 425 : 290 - 299
  • [19] KAAS: A Keyword-Aware Attention Abstractive Summarization Model for Scientific Articles
    Li, Shuaimin
    Xu, Jungang
    DATABASE SYSTEMS FOR ADVANCED APPLICATIONS, DASFAA 2022, PT III, 2022, : 263 - 271
  • [20] Enhancing Abstractive Summarization with Extracted Knowledge Graphs and Multi-Source Transformers
    Chen, Tong
    Wang, Xuewei
    Yue, Tianwei
    Bai, Xiaoyu
    Le, Cindy X. X.
    Wang, Wenping
    APPLIED SCIENCES-BASEL, 2023, 13 (13)