Enhanced Seq2Seq Autoencoder via Contrastive Learning for Abstractive Text Summarization

Cited: 7
|
Authors
Zheng, Chujie [1 ]
Zhang, Kunpeng [2 ]
Wang, Harry Jiannan [1 ]
Fan, Ling [3 ,4 ]
Wang, Zhe [4 ]
Affiliations
[1] Univ Delaware, Newark, DE 19716 USA
[2] Univ Maryland, College Pk, MD 20742 USA
[3] Tongji Univ, Shanghai, Peoples R China
[4] Tezign Com, Shanghai, Peoples R China
Source
2021 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA) | 2021
Keywords
Abstractive Text Summarization; Contrastive Learning; Data Augmentation; Seq2seq;
DOI
10.1109/BigData52589.2021.9671819
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this paper, we present a denoising sequence-to-sequence (seq2seq) autoencoder via contrastive learning for abstractive text summarization. Our model adopts a standard Transformer-based architecture with a multi-layer bi-directional encoder and an auto-regressive decoder. To enhance its denoising ability, we incorporate self-supervised contrastive learning along with various sentence-level document augmentations. These two components, the seq2seq autoencoder and contrastive learning, are jointly trained through fine-tuning, which improves the performance of text summarization with regard to ROUGE scores and human evaluation. We conduct experiments on two datasets and demonstrate that our model outperforms many existing benchmarks and even achieves comparable performance to state-of-the-art abstractive systems trained with more complex architectures and extensive computation resources.
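The joint objective the abstract describes — a seq2seq reconstruction loss combined with a self-supervised contrastive term over augmented views of the same document — can be sketched roughly as below. This is a minimal illustration under assumptions, not the authors' released code: the NT-Xent formulation, the weight `lam`, and the function names `nt_xent_loss` / `joint_loss` are choices made here for clarity.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (normalized temperature-scaled cross-entropy) contrastive loss.

    z1, z2: (batch, dim) encoder embeddings of two augmented views of the
    same documents; row i of z1 and row i of z2 form a positive pair."""
    z = np.concatenate([z1, z2], axis=0)              # (2B, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize rows
    sim = z @ z.T / temperature                       # scaled cosine similarities
    n = sim.shape[0]
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    pos = np.roll(np.arange(n), n // 2)               # positive index for each row
    logsumexp = np.log(np.exp(sim).sum(axis=1))       # denominator over all others
    loss = -(sim[np.arange(n), pos] - logsumexp)
    return loss.mean()

def joint_loss(recon_nll, z1, z2, lam=0.5):
    """Joint fine-tuning objective: seq2seq reconstruction negative
    log-likelihood plus a weighted contrastive term (weight lam assumed)."""
    return recon_nll + lam * nt_xent_loss(z1, z2)
```

Identical views score a low contrastive loss, while mismatched views score higher, which is what drives the encoder toward augmentation-invariant document representations.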
Pages: 1764 - 1771
Page count: 8
Related papers
50 records
  • [21] Contrastive Enhanced Learning for Multi-Label Text Classification
    Wu, Tianxiang
    Yang, Shuqun
    APPLIED SCIENCES-BASEL, 2024, 14 (19):
  • [22] Diverse Image Captioning via Conditional Variational Autoencoder and Dual Contrastive Learning
    Xu, Jing
    Liu, Bing
    Zhou, Yong
    Liu, Mingming
    Yao, Rui
    Shao, Zhiwen
    ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2024, 20 (01)
  • [23] Fully Unsupervised Deepfake Video Detection Via Enhanced Contrastive Learning
    Qiao, Tong
    Xie, Shichuang
    Chen, Yanli
    Retraint, Florent
    Luo, Xiangyang
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2024, 46 (07) : 4654 - 4668
  • [24] Improving cell type identification with Gaussian noise-augmented single-cell RNA-seq contrastive learning
    Alsaggaf, Ibrahim
    Buchan, Daniel
    Wan, Cen
    BRIEFINGS IN FUNCTIONAL GENOMICS, 2024, 23 (04) : 441 - 451
  • [25] Self-supervised contrastive learning for integrative single cell RNA-seq data analysis
    Han, Wenkai
    Cheng, Yuqi
    Chen, Jiayang
    Zhong, Huawen
    Hu, Zhihang
    Chen, Siyuan
    Zong, Licheng
    Hong, Liang
    Chan, Ting-Fung
    King, Irwin
    Gao, Xin
    Li, Yu
    BRIEFINGS IN BIOINFORMATICS, 2022, 23 (05)
  • [26] Text semantic matching with an enhanced sample building method based on contrastive learning
    Wu, Lishan
    Hu, Jie
    Teng, Fei
    Li, Tianrui
    Du, Shengdong
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2023, 14 (09) : 3105 - 3112
  • [28] CLG-Trans: Contrastive learning for code summarization via graph attention-based transformer
    Zeng, Jianwei
    He, Yutong
    Zhang, Tao
    Xu, Zhou
    Han, Qiang
    SCIENCE OF COMPUTER PROGRAMMING, 2023, 226
  • [30] Enhanced cross-prompt trait scoring via syntactic feature fusion and contrastive learning
    Sun, Jingbo
    Peng, Weiming
    Song, Tianbao
    Liu, Haitao
    Zhu, Shuqin
    Song, Jihua
    JOURNAL OF SUPERCOMPUTING, 2024, 80 (04) : 5390 - 5407