Deep Differential Amplifier for Extractive Summarization

Times Cited: 0
Authors
Jia, Ruipeng [1,2]
Cao, Yanan [1,2]
Fang, Fang [1,2]
Zhou, Yuchen [1]
Fang, Zheng [1]
Liu, Yanbing [1,2]
Wang, Shi [3]
Affiliations
[1] Chinese Acad Sci, Inst Informat Engn, Beijing, Peoples R China
[2] Univ Chinese Acad Sci, Sch Cyber Secur, Beijing, Peoples R China
[3] Chinese Acad Sci, Inst Comp Technol, Beijing, Peoples R China
Source
59TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS AND THE 11TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING, VOL 1 (ACL-IJCNLP 2021), 2021
Funding
National Natural Science Foundation of China;
Keywords
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405
Abstract
In sentence-level extractive summarization, the ratio of selected to unselected sentences is highly disproportionate, which flattens the summary features when the classification objective is optimized. This class imbalance is inherent to extractive summarization and cannot easily be addressed by data sampling or data augmentation algorithms. To address this problem, we recast single-document extractive summarization as a rebalancing problem and present a deep differential amplifier framework that enhances the features of summary sentences. Specifically, we calculate and amplify the semantic difference between each sentence and all other sentences, and apply residual units to deepen the differential amplifier architecture. Furthermore, the objective loss of the minority class is boosted by a weighted cross-entropy. In this way, our model attends to the pivotal information that distinguishes one sentence, unlike previous approaches, which model all informative context in the source document. Experimental results on two benchmark datasets show that our summarizer performs competitively against state-of-the-art methods. Our source code will be available on GitHub.
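The abstract's two core ideas (a residual differential amplifier over sentence vectors, plus a class-weighted loss) can be sketched roughly as follows in PyTorch. This is a minimal illustration, not the authors' implementation: the class name DifferentialAmplifierLayer, the linear-plus-ReLU amplifier, and the pos_weight value of 3.0 are assumptions for the sake of the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DifferentialAmplifierLayer(nn.Module):
    """One residual differential-amplifier block (illustrative sketch).

    For each sentence vector h_i, subtract the mean of the other
    sentences' vectors and amplify the difference with a learned
    projection, then add a residual connection back to h_i.
    """

    def __init__(self, hidden_size: int):
        super().__init__()
        # Hypothetical amplifier: a single learned linear projection.
        self.amplify = nn.Linear(hidden_size, hidden_size)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (num_sentences, hidden_size) encoded sentence vectors
        n = h.size(0)
        # Mean of the *other* sentences for each position i:
        # (sum over all sentences - h_i) / (n - 1)
        others_mean = (h.sum(dim=0, keepdim=True) - h) / max(n - 1, 1)
        diff = h - others_mean                 # semantic difference
        return h + F.relu(self.amplify(diff))  # residual amplification


def weighted_bce(logits: torch.Tensor, labels: torch.Tensor,
                 pos_weight: float = 3.0) -> torch.Tensor:
    """Weighted cross-entropy boosting the minority (summary) class."""
    weights = torch.where(labels == 1.0,
                          torch.full_like(labels, pos_weight),
                          torch.ones_like(labels))
    return F.binary_cross_entropy_with_logits(logits, labels,
                                              weight=weights)
```

At a high level, stacking several such layers over the output of a sentence encoder and training the resulting scores with weighted_bce against binary selection labels mirrors the rebalancing idea the abstract describes.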
Pages: 366-376
Number of pages: 11