Conditional Generative Adversarial Capsule Networks

Cited by: 0
Authors
Kong R. [1,2]
Huang G. [2]
Affiliations
[1] School of Intelligent Systems Science and Engineering, Jinan University (Zhuhai Campus), Zhuhai
[2] College of Information Science and Technology, Jinan University, Guangzhou
Source
Zidonghua Xuebao/Acta Automatica Sinica | 2020, Vol. 46, No. 1
Keywords
Capsule networks (CapsNets); Conditional model; Generative adversarial networks (GAN); Image generation
DOI
10.16383/j.aas.c180590
Abstract
Generative adversarial networks (GANs) are one of the main approaches to learning deep generative models in an unsupervised fashion. Generative modeling based on differentiable generator networks is a highly active research area, but because real data distributions are complex, GANs suffer from unstable training and uneven sample quality. Within generative modeling, the design of the network structure is an important research direction. In this paper, we use capsule networks (CapsNets) to rebuild the structure of the GAN, train the model with the Earth-Mover-distance-based loss proposed in Wasserstein GAN (WGAN), and add conditional constraints to stabilize the generation process, yielding a conditional generative adversarial capsule network (C-CapsGAN). Results of multiple simulation experiments on the MNIST and CIFAR-10 datasets show that applying CapsNets to generative modeling is feasible. Moreover, compared with existing similar models, C-CapsGAN not only produces high-quality images stably in image generation tasks, but also suppresses mode collapse more effectively. Copyright © 2020 Acta Automatica Sinica. All rights reserved.
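The training recipe the abstract describes (a WGAN-style Earth-Mover loss with conditional constraints and a capsule-network discriminator) can be sketched roughly as below. This is an illustrative outline only, not the authors' code: the capsule discriminator is stubbed out as a plain fully connected critic, and all names and hyperparameters (Generator, Critic, Z_DIM, the RMSprop learning rate, the weight-clipping range) are assumptions taken from the original WGAN setup rather than from this paper.

# Minimal conditional WGAN sketch (assumed details, not the paper's released code).
# The critic scores (image, label) pairs; the capsule-network discriminator the
# paper uses is replaced here by a simple fully connected stub for brevity.
import torch
import torch.nn as nn

Z_DIM, N_CLASSES, IMG = 100, 10, 28  # MNIST-like sizes (assumed)

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(Z_DIM + N_CLASSES, 256), nn.ReLU(),
            nn.Linear(256, IMG * IMG), nn.Tanh())
    def forward(self, z, y_onehot):
        # Condition the generator by concatenating noise with the class label
        return self.net(torch.cat([z, y_onehot], dim=1)).view(-1, 1, IMG, IMG)

class Critic(nn.Module):
    # Placeholder for the capsule-network discriminator described in the paper
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(IMG * IMG + N_CLASSES, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1))  # unbounded score, no sigmoid (WGAN critic)
    def forward(self, x, y_onehot):
        return self.net(torch.cat([x.flatten(1), y_onehot], dim=1))

def critic_loss(critic, real, fake, y):
    # Negative Earth-Mover estimate: maximize E[D(real,y)] - E[D(fake,y)]
    return -(critic(real, y).mean() - critic(fake, y).mean())

def generator_loss(critic, fake, y):
    # Generator tries to raise the critic's score on its conditioned fakes
    return -critic(fake, y).mean()

# Usage sketch: one critic step followed by one generator step
G, D = Generator(), Critic()
opt_d = torch.optim.RMSprop(D.parameters(), lr=5e-5)
opt_g = torch.optim.RMSprop(G.parameters(), lr=5e-5)

real = torch.rand(64, 1, IMG, IMG) * 2 - 1  # stand-in batch in [-1, 1]
y = nn.functional.one_hot(torch.randint(0, N_CLASSES, (64,)), N_CLASSES).float()
z = torch.randn(64, Z_DIM)

opt_d.zero_grad()
critic_loss(D, real, G(z, y).detach(), y).backward()
opt_d.step()
for p in D.parameters():  # weight clipping to enforce the Lipschitz constraint, as in the original WGAN
    p.data.clamp_(-0.01, 0.01)

opt_g.zero_grad()
generator_loss(D, G(z, y), y).backward()
opt_g.step()

In the paper's actual model the critic would be a capsule network rather than this stub, and further details (routing, architecture depth, training schedule) are not reproduced here; the sketch only shows how an Earth-Mover-distance loss combined with label conditioning is typically wired together.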
Pages: 94-107
Number of pages: 13