Subset-of-Data Variational Inference for Deep Gaussian-Processes Regression

Cited by: 0
Authors
Jain, Ayush [1 ]
Srijith, P. K. [1 ]
Khan, Mohammad Emtiyaz [2 ]
Affiliations
[1] Indian Inst Technol Hyderabad, Dept Comp Sci & Engn, Hyderabad, India
[2] RIKEN Ctr AI Project, Tokyo, Japan
Source
UNCERTAINTY IN ARTIFICIAL INTELLIGENCE, VOL 161 | 2021 / Vol. 161
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Deep Gaussian Processes (DGPs) are flexible multi-layer extensions of Gaussian Processes, but their training remains challenging. Sparse approximations simplify training, but they often require optimizing a large number of inducing inputs and their locations across layers. In this paper, we simplify training by fixing the locations to a subset of the data and sampling the inducing inputs from a variational distribution. This reduces the number of trainable parameters and the computational cost without significant performance degradation, as demonstrated by our empirical results on regression problems. Our modifications simplify and stabilize DGP training while making it amenable to sampling schemes for setting the inducing inputs.
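As a rough illustration of the subset-of-data idea in a single-layer setting (the paper itself treats deep GPs and samples the inducing inputs variationally), the sketch below computes a standard sparse GP posterior mean with the inducing locations fixed to a random subset of the training inputs rather than optimized. The kernel, hyperparameters, and function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def rbf(A, B, lengthscale=0.2, variance=1.0):
    """Squared-exponential kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def sparse_gp_mean(X, y, Xstar, num_inducing=30, noise_std=0.1, seed=0):
    """Sparse GP regression mean with inducing locations Z fixed to a
    random subset of the training inputs X (no location optimization)."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=num_inducing, replace=False)
    Z = X[idx]                                    # fixed subset-of-data locations
    Kmm = rbf(Z, Z) + 1e-8 * np.eye(num_inducing)  # jitter for numerical stability
    Kmn = rbf(Z, X)
    Ksm = rbf(Xstar, Z)
    # The optimal variational q(u) yields the predictive mean
    #   m(x*) = K*m (sigma^2 Kmm + Kmn Knm)^{-1} Kmn y.
    A = noise_std**2 * Kmm + Kmn @ Kmn.T
    return Ksm @ np.linalg.solve(A, Kmn @ y)

# Example: fit a noiseless sine with 30 of 100 points as inducing locations.
X = np.linspace(0.0, 1.0, 100)[:, None]
y = np.sin(2 * np.pi * X[:, 0])
mu = sparse_gp_mean(X, y, X)
```

Fixing `Z` to a data subset removes the inducing locations from the set of trainable parameters; the paper's contribution is to do this across the layers of a DGP while treating the inducing inputs themselves variationally.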
Pages: 1362-1370
Page count: 9