Unsupervised Style Transfer in News Headlines via Discrete Style Space

Cited: 0
Authors
Liu, Qianhui [1]
Gao, Yang [1]
Yang, Yizhe [1]
Institutions
[1] Beijing Inst Technol, Sch Comp Sci & Technol, Beijing, Peoples R China
Source
CHINESE COMPUTATIONAL LINGUISTICS, CCL 2023 | 2023 / Vol. 14232
DOI
10.1007/978-981-99-6207-5_6
CLC Classification
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The goal of headline style transfer in this paper is to make a headline more attractive while preserving its meaning. The absence of parallel training data is one of the main obstacles in this field. In this work, we design a discrete style space for unsupervised headline style transfer, abbreviated D-HST. The model decomposes style-dependent text generation into content-feature extraction and style modelling; the generation decoder then receives input from the content, the style, and their mixed components. In particular, we consider the textual style signal to be more abstract than the text itself. We therefore propose to model the style representation space as a discrete space, in which each discrete point corresponds to a particular category of style that can be elicited by syntactic structure. Finally, we provide a new style-transfer dataset, named TechST, which focuses on transferring news headlines into ones that are more eye-catching on technical social media. In the experiments, we develop two automatic evaluation metrics - style transfer rate (STR) and style-content trade-off (SCT) - along with several traditional criteria to assess the overall effectiveness of the style transfer. In addition, a thorough human evaluation assesses generation quality and creatively mimics a scenario in which a user clicks on appealing headlines to determine the click-through rate. Our results indicate that D-HST achieves state-of-the-art results across these comprehensive evaluations.
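The core idea in the abstract - a discrete style space in which a continuous style signal is snapped to one of a fixed set of style points - can be sketched as a vector-quantisation-style codebook lookup. This is only an illustrative sketch under assumed details (codebook size, dimensionality, and the `DiscreteStyleSpace` class are hypothetical; the paper's actual model and training procedure are not given in this record):

```python
import numpy as np

class DiscreteStyleSpace:
    """Minimal sketch of a discrete style space: a codebook of K style
    embeddings; a continuous style vector is mapped to its nearest
    codebook entry, so each entry acts as one discrete style category."""

    def __init__(self, num_styles: int, dim: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        # Each row is one discrete style point (e.g. one syntactic style).
        self.codebook = rng.normal(size=(num_styles, dim))

    def quantize(self, style_vec: np.ndarray) -> tuple[int, np.ndarray]:
        # Pick the codebook entry closest in Euclidean distance.
        dists = np.linalg.norm(self.codebook - style_vec, axis=1)
        idx = int(np.argmin(dists))
        return idx, self.codebook[idx]

# Usage: quantise an encoder's continuous style signal to a discrete style;
# the decoder would then condition on this embedding alongside the content.
space = DiscreteStyleSpace(num_styles=8, dim=16)
style_id, style_emb = space.quantize(np.zeros(16))
```

In a trained model the codebook entries would be learned jointly with the encoder and decoder rather than drawn at random; the lookup step itself is what makes the style space discrete.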
Pages: 91 - 105
Number of pages: 15