Application of Improved PF-AFN in Virtual Try-on

Cited by: 0
Authors
Han C. [1 ]
Li J. [1 ]
Wang Z. [1 ]
Affiliations
[1] School of Electronic Information and Artificial Intelligence, Shaanxi University of Science & Technology, Xi'an
Source
Jisuanji Fuzhu Sheji Yu Tuxingxue Xuebao/Journal of Computer-Aided Design and Computer Graphics | 2023, Vol. 35, No. 10
Keywords
appearance flow; human parsing; image generation network; virtual try-on;
DOI
10.3724/SP.J.1089.2023.19596
Abstract
An improved virtual try-on method is proposed to address two problems in PF-AFN: insufficient accuracy of the predicted appearance flow and poor generalization ability. First, to decouple the shape and style of the clothing, a human-body prediction module synthesizes a human parsing map aligned with the person wearing the target clothes. Then, based on the collinearity-preserving property of affine transformations and the characteristics of the appearance flow, a collinearity loss term and a distance loss term are added to constrain the deformation process on local regions. Finally, the human parsing map is concatenated channel-wise with the original input and fed to a UNet++-like generation network built on ResNet to obtain the final virtual try-on images. A comparative experiment on the VITON dataset against 4 other state-of-the-art methods shows that the proposed method improves SSIM, FID, and LPIPS by 1.2%, 11.1%, and 5.8%, respectively, over the best competing method, while image clarity and inception score remain comparable to the current state of the art. Overall, the proposed method solves the original problems and achieves better results. © 2023 Institute of Computing Technology. All rights reserved.
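The collinearity constraint mentioned in the abstract can be illustrated with a minimal sketch. This is not the paper's actual loss (the tensor layout, the L1 norm, and the absence of weighting are all assumptions here); it only demonstrates the underlying idea: affine transformations preserve collinearity, so an appearance flow that is locally affine varies linearly across neighboring pixels, and its second-order differences vanish.

```python
import numpy as np

def collinearity_loss(flow):
    """Second-order difference penalty on an appearance flow field.

    flow: array of shape (2, H, W) holding per-pixel (dx, dy) offsets.
    For a locally affine flow, f[x-1] + f[x+1] - 2*f[x] == 0 along each
    axis, so this loss is zero exactly when the flow is locally linear.
    """
    # Horizontal second difference: f[x-1] + f[x+1] - 2*f[x]
    dh = flow[:, :, :-2] + flow[:, :, 2:] - 2.0 * flow[:, :, 1:-1]
    # Vertical second difference: f[y-1] + f[y+1] - 2*f[y]
    dv = flow[:, :-2, :] + flow[:, 2:, :] - 2.0 * flow[:, 1:-1, :]
    return float(np.abs(dh).mean() + np.abs(dv).mean())
```

For a flow field that is exactly affine in the pixel coordinates the penalty is zero, while any deformation that bends straight lines (and thus violates local affinity) is penalized.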
Pages: 1500-1509