Mining Both Commonality and Specificity From Multiple Documents for Multi-Document Summarization

Cited by: 3
Authors
Ma, Bing [1 ,2 ]
Affiliations
[1] Chinese Acad Sci, Inst Comp Technol, Key Lab Intelligent Informat Proc, Beijing 100190, Peoples R China
[2] Univ Chinese Acad Sci, Beijing 101408, Peoples R China
Keywords
Vectors; Task analysis; Feature extraction; Natural languages; Clustering algorithms; Symmetric matrices; Standards; Class tree; commonality and specificity; hierarchical clustering of documents; multi-document summarization; pre-trained embedding representation; TEXT; FRAMEWORK;
DOI
10.1109/ACCESS.2024.3388493
Chinese Library Classification
TP [automation technology; computer technology]
Subject Classification Code
0812
Abstract
The multi-document summarization task requires the summarizer to generate a short text that covers the important information of the original documents while satisfying content diversity. To fulfill the dual requirements of coverage and diversity, this study introduces a novel method. First, a class tree is constructed through hierarchical clustering of the documents. Then, a sentence selection method based on the class tree is proposed for generating a summary. Specifically, a top-down traversal is performed on the class tree, during which sentences are selected from each node based on their similarity to the centroid of the documents within the node and their dissimilarity to the centroid of the documents outside the node. Sentences selected from the root node reflect the commonality of all documents, while sentences selected from the child nodes reflect the distinct specificity of the respective subclasses. Experimental results on the standard text summarization datasets DUC'2002, DUC'2003, and DUC'2004 demonstrate that the proposed method significantly outperforms a variant that considers only the commonality of all documents, achieving average improvements of up to 1.54 and 1.42 in ROUGE-1 and ROUGE-L scores, respectively. It also significantly outperforms a variant that considers only the specificity of subclasses, achieving average improvements of up to 2.16 and 2.01 in ROUGE-1 and ROUGE-L scores, respectively. Furthermore, extensive experiments on the DUC'2004 and Multi-News datasets show that the proposed method outperforms many competitive supervised and unsupervised multi-document summarization methods and achieves competitive results.
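The per-node scoring described in the abstract (similarity to the centroid of documents inside the node minus similarity to the centroid of documents outside it) can be sketched as follows. This is a minimal illustration under assumed conventions, not the paper's exact formulation: the function names and toy vectors are hypothetical, cosine similarity over pre-trained sentence embeddings is assumed, and the class-tree traversal itself is omitted.

```python
import math

def cosine(a, b):
    """Cosine similarity between two dense vectors (0.0 if either is zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def centroid(vectors):
    """Component-wise mean of a non-empty list of vectors."""
    n = len(vectors)
    return [sum(col) / n for col in zip(*vectors)]

def select_sentences(node_vecs, other_vecs, k=1):
    """Score each sentence embedding in the node by its similarity to the
    in-node centroid minus its similarity to the out-of-node centroid, and
    return the indices of the top-k sentences."""
    c_in = centroid(node_vecs)
    c_out = centroid(other_vecs)
    scores = [cosine(v, c_in) - cosine(v, c_out) for v in node_vecs]
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
```

At the root node the "outside" set is empty and only the commonality term applies; at deeper nodes the dissimilarity term pushes selection toward sentences specific to that subclass.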
Pages: 54371-54381
Page count: 11