Efficient Priors for Scalable Variational Inference in Bayesian Deep Neural Networks

Cited: 7
Authors
Krishnan, Ranganath [1 ]
Subedar, Mahesh [1 ]
Tickoo, Omesh [1 ]
Affiliation
[1] Intel Labs, Hillsboro, OR 97124 USA
Source
2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW) | 2019
DOI
10.1109/ICCVW.2019.00102
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Stochastic variational inference for Bayesian deep neural networks (DNNs) requires specifying priors and approximate posterior distributions for neural network weights. Specifying meaningful weight priors is a challenging problem, particularly when scaling variational inference to deeper architectures with high-dimensional weight spaces. Based on the empirical Bayes approach, we propose the Bayesian MOdel Priors Extracted from Deterministic DNN (MOPED) method, which chooses meaningful prior distributions over the weight space using deterministic weights derived from a pretrained DNN of equivalent architecture. We empirically evaluate the proposed approach on real-world applications including image classification, video activity recognition, and audio classification tasks with neural network architectures of varying complexity. The proposed method enables scalable variational inference with faster training convergence and provides reliable uncertainty quantification.
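The empirical Bayes idea in the abstract can be sketched as follows: take the deterministic weights of a pretrained network as the mean of both the weight prior and the initial variational posterior, and scale the initial posterior standard deviation by the weight magnitude. This is a minimal NumPy sketch, not the authors' implementation; the function name `moped_init`, the softplus parameterization of the standard deviation, and the scale hyperparameter `delta` are assumptions made for illustration.

```python
import numpy as np

def moped_init(w_pretrained, delta=0.1):
    """MOPED-style empirical-Bayes initialization (illustrative sketch).

    w_pretrained : array of deterministic weights from a pretrained DNN
    delta        : hypothetical scale factor for the initial posterior std
    Returns variational posterior parameters (mu, rho) and a Gaussian
    prior (prior_mu, prior_sigma) centered on the pretrained weights.
    """
    # Variational posterior mean starts at the pretrained weights.
    mu = w_pretrained.copy()
    # Initial posterior std proportional to |w|, floored to stay positive.
    sigma = np.maximum(delta * np.abs(w_pretrained), 1e-8)
    # Store sigma via the inverse softplus (rho), so that
    # softplus(rho) = log(1 + exp(rho)) = sigma remains positive in training.
    rho = np.log(np.expm1(sigma))
    # Prior: Gaussian centered on the pretrained weights, unit scale.
    prior_mu = w_pretrained.copy()
    prior_sigma = np.ones_like(w_pretrained)
    return mu, rho, prior_mu, prior_sigma
```

During variational training, `mu` and `rho` would be optimized against the KL divergence to the prior `N(prior_mu, prior_sigma^2)`, so the search starts from, and is anchored near, the pretrained solution.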
Pages: 773-777
Page count: 5