An Error Analysis of Generative Adversarial Networks for Learning Distributions

Cited by: 0
Authors
Huang, Jian [1 ]
Jiao, Yuling [2 ]
Li, Zhen [1 ]
Liu, Shiao [1 ]
Wang, Yang [3 ]
Yang, Yunfei [3 ]
Affiliations
[1] Department of Statistics and Actuarial Science, University of Iowa, Iowa City, IA, United States
[2] School of Mathematics and Statistics, Hubei Key Laboratory of Computational Science, Wuhan University, Wuhan, China
[3] Department of Mathematics, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong
Funding
National Natural Science Foundation of China
Keywords
Risk assessment; Deep neural networks; Errors; Sampling; Network architecture; Probability distributions
DOI
Not available
Abstract
This paper studies how well generative adversarial networks (GANs) learn probability distributions from finite samples. Our main results establish the convergence rates of GANs under a collection of integral probability metrics defined through Hölder classes, including the Wasserstein distance as a special case. We also show that GANs are able to adaptively learn data distributions with low-dimensional structures or with Hölder densities, when the network architectures are chosen properly. In particular, for distributions concentrated around a low-dimensional set, we show that the learning rates of GANs do not depend on the high ambient dimension, but on the lower intrinsic dimension. Our analysis is based on a new oracle inequality decomposing the estimation error into the generator and discriminator approximation errors and the statistical error, which may be of independent interest. © 2022 Jian Huang, Yuling Jiao, Zhen Li, Shiao Liu, Yang Wang and Yunfei Yang.
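The abstract's central object, an integral probability metric (IPM) d_F(mu, nu) = sup_{f in F} E_mu[f] - E_nu[f], specializes to the Wasserstein-1 distance when F is the 1-Lipschitz class. As a minimal illustration (not taken from the paper), the sketch below uses the standard fact that in one dimension, for two equal-size empirical samples, W1 reduces to the mean absolute difference of the sorted samples; the distribution names and sample sizes are arbitrary choices for the example.

```python
import numpy as np

def wasserstein1_empirical_1d(x, y):
    """W1 distance between two equal-size 1D empirical samples.

    In 1D, W1(mu_n, nu_n) equals the L1 distance between the empirical
    quantile functions, i.e. the mean |x_(i) - y_(i)| over sorted samples.
    """
    x = np.sort(np.asarray(x, dtype=float))
    y = np.sort(np.asarray(y, dtype=float))
    assert x.shape == y.shape, "equal sample sizes assumed in this sketch"
    return float(np.mean(np.abs(x - y)))

rng = np.random.default_rng(0)
target = rng.normal(0.0, 1.0, size=5000)     # stand-in for the data distribution
generated = rng.normal(0.1, 1.0, size=5000)  # stand-in for generator samples
print(wasserstein1_empirical_1d(target, generated))
```

A GAN discriminator restricted to a Hölder or Lipschitz network class estimates this supremum variationally; the paper's rates quantify how the resulting estimate degrades with finite samples and finite network capacity.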