DiffuSum: Generation Enhanced Extractive Summarization with Diffusion

Cited by: 0
Authors
Zhang, Haopeng [1 ]
Liu, Xiao [1 ]
Zhang, Jiawei [1 ]
Affiliations
[1] Univ Calif Davis, IFM Lab, Dept Comp Sci, Davis, CA 95616 USA
Source
FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2023) | 2023
Abstract
Extractive summarization aims to form a summary by directly extracting sentences from the source document. Existing works mostly formulate it as a sequence labeling problem, making a label prediction for each individual sentence. This paper proposes DiffuSum, a novel paradigm for extractive summarization that directly generates the desired summary sentence representations with diffusion models and extracts sentences based on sentence representation matching. In addition, DiffuSum jointly optimizes a contrastive sentence encoder with a matching loss for sentence representation alignment and a multi-class contrastive loss for representation diversity. Experimental results show that DiffuSum achieves new state-of-the-art extractive results on CNN/DailyMail, with ROUGE scores of 44.83/22.56/40.56. Experiments on two other datasets with different summary lengths also demonstrate the effectiveness of DiffuSum. The strong performance of our framework shows the great potential of adapting generative models for extractive summarization. To encourage follow-up work, we have released our code at https://github.com/hpzhang94/DiffuSum.
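As a rough illustration of the matching-based extraction step the abstract describes, the sketch below greedily pairs each diffusion-generated summary representation with its most similar (not yet selected) document sentence representation by cosine similarity. The function name, array shapes, and greedy strategy are illustrative assumptions, not the released DiffuSum implementation.

```python
# Minimal sketch, assuming generated summary representations and document
# sentence representations live in the same embedding space.
import numpy as np

def extract_by_matching(doc_reps: np.ndarray, gen_reps: np.ndarray) -> list:
    """Return indices of document sentences whose representations best match
    the generated summary representations (cosine similarity, greedy)."""
    # L2-normalize so that a dot product equals cosine similarity.
    doc = doc_reps / np.linalg.norm(doc_reps, axis=1, keepdims=True)
    gen = gen_reps / np.linalg.norm(gen_reps, axis=1, keepdims=True)
    sim = gen @ doc.T                     # [num_summary_slots, num_doc_sentences]
    selected = []
    for row in sim:                       # one generated representation per summary slot
        row = row.copy()
        row[selected] = -np.inf           # do not extract the same sentence twice
        selected.append(int(row.argmax()))
    return selected

# Toy usage: 5 document sentences, 2 generated summary representations.
rng = np.random.default_rng(0)
doc_reps = rng.normal(size=(5, 8))
gen_reps = doc_reps[[3, 1]] + 0.01 * rng.normal(size=(2, 8))
print(extract_by_matching(doc_reps, gen_reps))   # -> [3, 1]
```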
Pages: 13089-13100 (12 pages)