TSTG: A Text Style Transfer Model Based on Generative Adversarial Networks

Cited: 0
Authors
Mu, Zhiying [1 ]
Wang, Zhitai [1 ]
Peng, Litao [1 ]
Lin, Shengchuan [1 ]
Affiliations
[1] Northwestern Polytechnical University, Xi'an 710072, Shaanxi, People's Republic of China
Keywords
Internet of Things; Semantics; Accuracy; Training; Generative adversarial networks; Reinforcement learning; Generators; Data models; Transformers; Data mining; Discriminator; generative adversarial network (GAN); generator; style transfer
DOI
10.1109/JIOT.2025.3541088
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Text style transfer (TST) models are gaining considerable prominence in Internet of Things (IoT) applications. Conventional encoder-decoder frameworks, however, are constrained by their dependence on parallel corpora and by rigid architectures. This article introduces a TST model based on generative adversarial networks (TSTG) and reinforcement learning to improve the efficacy of text style transfer. The generator uses a sequence-to-sequence (Seq2Seq) architecture that processes the original text together with the target style attributes to perform the conversion, while the discriminator assesses the generated text through style judgments and ranking scores to ensure high-quality output. In addition, a new classification model, Bidirectional Encoder Representations from Transformers-Text Convolutional Neural Network (BTNN), improves the style evaluation of utterances. By alternately training the generator and the discriminator, the approach significantly enhances fluency and accuracy and addresses common challenges, including gradient propagation during training and the detection of obscure style expressions. Comprehensive evaluations demonstrate that the model achieves superior style accuracy, content preservation, and text fluency, marking a significant advance in TST methodology.
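To make the training scheme concrete, below is a minimal PyTorch sketch of the alternating adversarial loop the abstract describes: a Seq2Seq generator conditioned on a target-style embedding, a discriminator that scores style, and a REINFORCE-style policy-gradient update that sidesteps the non-differentiability of sampled tokens. All class and function names (Seq2SeqGenerator, Discriminator, train_step) are hypothetical placeholders, not the authors' released code, and the BTNN discriminator (BERT combined with a TextCNN) is replaced here by a simple GRU scorer for brevity.

```python
# Minimal sketch of an alternating GAN-with-policy-gradient loop for TST.
# All names are hypothetical; this is not the authors' implementation.
import torch
import torch.nn as nn

class Seq2SeqGenerator(nn.Module):
    """Hypothetical Seq2Seq generator: encodes the source sentence,
    injects a target-style embedding into the decoder state, and
    samples a styled sentence token by token."""
    def __init__(self, vocab_size, hidden=256, n_styles=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.style_embed = nn.Embedding(n_styles, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, src, style, max_len=20):
        _, h = self.encoder(self.embed(src))
        h = h + self.style_embed(style).unsqueeze(0)   # condition on target style
        tok = src[:, :1]                               # seed the decoder
        log_probs, tokens = [], []
        for _ in range(max_len):
            o, h = self.decoder(self.embed(tok), h)
            dist = torch.distributions.Categorical(logits=self.out(o[:, -1]))
            tok = dist.sample().unsqueeze(1)           # discrete, non-differentiable
            log_probs.append(dist.log_prob(tok.squeeze(1)))
            tokens.append(tok)
        return torch.cat(tokens, 1), torch.stack(log_probs, 1)

class Discriminator(nn.Module):
    """Stand-in for the BTNN-style discriminator: maps a token sequence
    to a style-match score in (0, 1)."""
    def __init__(self, vocab_size, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)

    def forward(self, seq):
        _, h = self.rnn(self.embed(seq))
        return torch.sigmoid(self.score(h[-1])).squeeze(-1)

def train_step(gen, disc, g_opt, d_opt, src, style, real_styled):
    # 1) Discriminator step: push real styled text toward 1, generated toward 0.
    with torch.no_grad():
        fake, _ = gen(src, style)
    d_loss = -(torch.log(disc(real_styled) + 1e-8).mean()
               + torch.log(1 - disc(fake) + 1e-8).mean())
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Generator step: REINFORCE, using the discriminator score as reward,
    #    since sampled tokens block ordinary backpropagation.
    fake, log_probs = gen(src, style)
    reward = disc(fake).detach()                       # no gradient through reward
    g_loss = -(log_probs.sum(1) * reward).mean()
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()

# Example wiring (hypothetical vocabulary size):
# gen  = Seq2SeqGenerator(vocab_size=10000)
# disc = Discriminator(vocab_size=10000)
# g_opt = torch.optim.Adam(gen.parameters(), lr=1e-4)
# d_opt = torch.optim.Adam(disc.parameters(), lr=1e-4)
```

Detaching the reward keeps the discriminator fixed during the generator update, which is one standard way to realize the alternating optimization and the gradient-propagation workaround the abstract alludes to.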
Pages: 18365-18375
Number of pages: 11