On the Convergence of Black-Box Variational Inference

Cited: 0
Authors
Kim, Kyurae [1]
Oh, Jisu [2]
Wu, Kaiwen [1]
Ma, Yi-An [3]
Gardner, Jacob R. [1]
Affiliations
[1] Univ Penn, Philadelphia, PA 19104 USA
[2] North Carolina State Univ, Raleigh, NC USA
[3] Univ Calif San Diego, La Jolla, CA USA
Source
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023) | 2023
Funding
U.S. National Science Foundation;
Keywords
MODELS;
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
We provide the first convergence guarantee for black-box variational inference (BBVI) with the reparameterization gradient. While preliminary investigations worked on simplified versions of BBVI (e.g., a bounded domain, bounded support, or optimizing only the scale), our setup does not require any such algorithmic modifications. Our results hold for log-smooth posterior densities, with and without strong log-concavity, and for the location-scale variational family. Notably, our analysis reveals that certain algorithm design choices commonly employed in practice, such as nonlinear parameterizations of the scale matrix, can result in suboptimal convergence rates. Fortunately, running BBVI with proximal stochastic gradient descent fixes these limitations and thus achieves the strongest known convergence guarantees. We evaluate this theoretical insight by comparing proximal SGD against other standard implementations of BBVI on large-scale Bayesian inference problems.
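The abstract's contrast between nonlinear scale parameterizations and proximal SGD can be made concrete: for a location-scale family, the non-smooth entropy term has a closed-form proximal operator, so the scale can be updated directly instead of through, e.g., a softplus transform. The sketch below is an illustration rather than the authors' implementation: it runs BBVI with the reparameterization gradient and a proximal step on a mean-field Gaussian family, where the target `log_joint` (a standard normal) and the step size `gamma` are assumptions made for the example.

```python
# Minimal BBVI sketch (illustrative, not the authors' code): the ELBO's
# smooth part is estimated with the reparameterization trick, while the
# non-smooth entropy term -sum(log sigma) is handled by its exact,
# closed-form proximal operator rather than a softplus parameterization.
import jax
import jax.numpy as jnp


def log_joint(z):
    # Hypothetical unnormalized log posterior; a standard Gaussian
    # stands in for a real Bayesian model.
    return -0.5 * jnp.sum(z ** 2)


def neg_energy(params, u):
    # Smooth part of the negative ELBO, -E_q[log pi(z)], estimated with
    # one reparameterized sample z = m + sigma * u, u ~ N(0, I).
    m, sigma = params
    z = m + sigma * u
    return -log_joint(z)


def prox_entropy(x, gamma):
    # prox_{gamma * h}(x) for h(sigma) = -sum(log sigma): coordinate-wise
    # positive root of s^2 - x*s - gamma = 0, so the output stays positive.
    return 0.5 * (x + jnp.sqrt(x ** 2 + 4.0 * gamma))


def step(params, key, gamma=1e-2):
    # Proximal SGD: gradient step on the smooth part, then the exact
    # proximal step on the entropy term for the scale.
    m, sigma = params
    u = jax.random.normal(key, m.shape)
    g_m, g_sigma = jax.grad(neg_energy)((m, sigma), u)
    return m - gamma * g_m, prox_entropy(sigma - gamma * g_sigma, gamma)


key = jax.random.PRNGKey(0)
params = (jnp.zeros(2), jnp.ones(2))
for _ in range(2000):
    key, sub = jax.random.split(key)
    params = step(params, sub)
print(params)  # location near 0, scale near 1 for this target
```

For this toy target, the fixed point of the proximal-gradient update is exactly the optimal location 0 and scale 1, which is why no positivity-enforcing reparameterization of `sigma` is needed.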
Pages: 43