An unsupervised font style transfer model based on generative adversarial networks

Authors
Sihan Zeng
Zhongliang Pan
Affiliations
[1] South China Normal University, Physics and Telecommunication Engineering
Source
Multimedia Tools and Applications | 2022, Vol. 81
Keywords
Chinese characters; Style transfer; Generative adversarial networks; Unsupervised learning; Style-attentional networks
DOI
Not available
Abstract
Chinese characters, owing to their complex structure and sheer number, make designing a complete character set extremely time-consuming for designers. As a result, the rapid growth in the use of characters across fields such as culture and business has created a sharp imbalance between the supply of and demand for Chinese font design. Although most existing Chinese character transformation models greatly alleviate this demand, the semantics of the generated characters cannot be guaranteed and generation efficiency is low. Moreover, these models require large amounts of paired training data, which entails substantial sample-processing time. To address these problems, this paper proposes an unsupervised Chinese character generation method based on generative adversarial networks, which fuses a Style-Attentional Net into a skip-connected U-Net to form the GAN generator architecture. The generator effectively and flexibly integrates local style patterns according to the semantic spatial distribution of the content image while retaining feature information at different scales. At the end of training, our model generates fonts that preserve the content features of the source domain and the style features of the target domain. A style specification module and a classification discriminator further allow the model to generate typefaces in multiple styles. The results show that the proposed model performs the Chinese character style transfer task well: it generates high-quality Chinese character images with complete structures and natural strokes. In both quantitative and qualitative comparison experiments, our model achieves better visual quality and image performance metrics than existing models. In sample-size experiments, clearly structured fonts are still generated, demonstrating the model's robustness. At the same time, the training conditions of our model are easy to satisfy, which facilitates generalization to real applications.
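To make the architecture described in the abstract concrete, below is a minimal PyTorch sketch (not the authors' published code) of the central idea: a SANet-style style-attention block fused into the bottleneck of a skip-connected U-Net generator. All module names, channel widths, and the two-level depth are illustrative assumptions; the paper's actual generator, style specification module, and classification discriminator are not reproduced here.

```python
# A minimal sketch of the generator idea only: SANet-style attention
# fused into a skip-connected U-Net. Channel sizes, depth, and wiring
# are assumptions for illustration, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class StyleAttention(nn.Module):
    """SANet-style block: re-weights style features by their affinity
    with the content features' spatial semantics."""
    def __init__(self, channels):
        super().__init__()
        self.f = nn.Conv2d(channels, channels, 1)    # content query
        self.g = nn.Conv2d(channels, channels, 1)    # style key
        self.h = nn.Conv2d(channels, channels, 1)    # style value
        self.out = nn.Conv2d(channels, channels, 1)

    def forward(self, content, style):
        b, c, hgt, wid = content.shape
        q = self.f(content).flatten(2).transpose(1, 2)  # (B, HW, C)
        k = self.g(style).flatten(2)                    # (B, C, HW)
        v = self.h(style).flatten(2).transpose(1, 2)    # (B, HW, C)
        attn = torch.softmax(q @ k / c ** 0.5, dim=-1)  # (B, HW, HW)
        o = (attn @ v).transpose(1, 2).reshape(b, c, hgt, wid)
        return content + self.out(o)  # residual fusion keeps content layout


def conv_block(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1),
        nn.InstanceNorm2d(cout),
        nn.ReLU(inplace=True))


class UNetGenerator(nn.Module):
    """Two-level skip-connected U-Net whose bottleneck injects style
    features through the attention block above."""
    def __init__(self, base=64):
        super().__init__()
        self.enc1 = conv_block(1, base)
        self.enc2 = conv_block(base, base * 2)
        self.style_enc1 = conv_block(1, base)
        self.style_enc2 = conv_block(base, base * 2)
        self.attn = StyleAttention(base * 2)
        self.dec2 = conv_block(base * 2, base)
        self.dec1 = nn.Conv2d(base * 2, 1, 3, padding=1)  # skip concat doubles channels

    def forward(self, content_img, style_img):
        c1 = self.enc1(content_img)
        c2 = self.enc2(F.max_pool2d(c1, 2))
        s1 = self.style_enc1(style_img)
        s2 = self.style_enc2(F.max_pool2d(s1, 2))
        fused = self.attn(c2, s2)                     # style injected at bottleneck
        up = F.interpolate(self.dec2(fused), scale_factor=2)
        return torch.tanh(self.dec1(torch.cat([up, c1], dim=1)))  # skip connection


if __name__ == "__main__":
    g = UNetGenerator()
    content = torch.randn(1, 1, 64, 64)  # glyph rendered in the source font
    style = torch.randn(1, 1, 64, 64)    # reference glyph in the target font
    print(g(content, style).shape)       # torch.Size([1, 1, 64, 64])
```

The residual fusion in `StyleAttention` lets local style patterns be applied where the content's semantic layout calls for them, while the U-Net skip connection preserves multi-scale structural detail of the source glyph, matching the behavior the abstract describes.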
Pages: 5305-5324 (19 pages)