Learning Cluster Patterns for Abstractive Summarization

Cited by: 0
Authors
Jo, Sung-Guk [1 ]
Park, Seung-Hyeok [1 ]
Kim, Jeong-Jae [2 ]
On, Byung-Won [1 ]
Affiliations
[1] Kunsan Natl Univ, Sch Software, Gunsan Si 54150, Jeollabuk Do, South Korea
[2] Yonsei Univ, Grad Sch Cognit Sci, Seoul 03722, South Korea
Funding
National Research Foundation of Singapore;
Keywords
Abstractive summarization; cluster; contextualized vector; BART; PEGASUS; classifiers;
DOI
10.1109/ACCESS.2023.3346911
CLC number
TP [Automation technology; computer technology];
Subject classification code
0812;
Abstract
Pre-trained sequence-to-sequence models such as PEGASUS and BART currently achieve state-of-the-art results in abstractive summarization. During fine-tuning, the encoder transforms sentences into context vectors in the latent space, and the decoder learns the summary generation task from these context vectors. In our approach, we consider two clusters of salient and non-salient context vectors, so that the decoder can attend more to the salient context vectors when generating the summary. To this end, we propose a novel cluster generator layer between the encoder and the decoder, which first generates the two clusters of salient and non-salient vectors and then normalizes and shrinks them so that the clusters move apart in the latent space. Our experimental results show that, by learning these distinct cluster patterns, the proposed model outperforms state-of-the-art models such as BART and PEGASUS, improving ROUGE by 2-30% and BERTScore by 0.1-0.8% on the CNN/DailyMail and XSUM data sets.
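The abstract's idea of splitting context vectors into salient and non-salient clusters, then normalizing and shrinking each cluster to push the two groups apart, can be illustrated with a minimal sketch. This is a hypothetical NumPy illustration of the general technique, not the paper's actual layer: the thresholded `saliency` scores, the 0.5 cutoff, and the `shrink` factor are all assumptions for demonstration.

```python
import numpy as np

def cluster_generator(context_vecs, saliency, shrink=0.5):
    """Hypothetical sketch of a cluster generator step.

    Splits encoder context vectors into salient / non-salient groups by
    thresholding a saliency score, L2-normalizes each vector, then moves
    each vector toward its cluster centroid. Tightening each cluster
    this way increases the separation between the two groups.
    """
    salient_mask = saliency >= 0.5
    out = context_vecs.astype(float).copy()
    for mask in (salient_mask, ~salient_mask):
        if mask.any():
            cluster = out[mask]
            # Normalize each vector to unit length.
            cluster = cluster / np.linalg.norm(cluster, axis=1, keepdims=True)
            # Shrink each vector toward the cluster centroid.
            centroid = cluster.mean(axis=0)
            out[mask] = (1 - shrink) * cluster + shrink * centroid
    return out, salient_mask
```

In the actual model, the decoder would then attend over these reshaped context vectors, with attention concentrating on the tighter, well-separated salient cluster.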
Pages: 146065-146075
Page count: 11