Federated Bayesian Deep Learning: The Application of Statistical Aggregation Methods to Bayesian Models

Cited by: 1
Authors
Fischer, John [1]
Orescanin, Marko [1]
Loomis, Justin [1]
McClure, Patrick [1]
Affiliations
[1] Naval Postgraduate School, Department of Computer Science, Monterey, CA 93943 USA
Keywords
Uncertainty; Bayes methods; Data models; Predictive models; Deep learning; Servers; Measurement uncertainty; Training; Gaussian distribution; Remote sensing; Bayesian deep learning; federated learning; Monte Carlo dropout; uncertainty decomposition; uncertainty quantification; variational inference; FORECAST UNCERTAINTY; INFLATION;
DOI
10.1109/ACCESS.2024.3513253
Chinese Library Classification (CLC)
TP [Automation technology; computer technology]
Discipline code
0812
Abstract
Federated learning (FL) is an approach to training machine learning models that takes advantage of multiple distributed datasets while maintaining data privacy and reducing communication costs associated with sharing local datasets. Aggregation strategies have been developed to pool or fuse the weights and biases of distributed deterministic models; however, modern deterministic deep learning (DL) models are often poorly calibrated and lack the ability to communicate a measure of epistemic uncertainty in prediction, which is desirable for remote sensing platforms and safety-critical applications. Conversely, Bayesian DL models are often well calibrated and capable of quantifying and communicating a measure of epistemic uncertainty along with a competitive prediction accuracy. Unfortunately, because the weights and biases in Bayesian DL models are defined by a probability distribution, simple application of the aggregation methods associated with FL schemes for deterministic models is either impossible or results in sub-optimal performance. In this work, we use independent and identically distributed (IID) and non-IID partitions of the CIFAR-10 dataset and a fully variational ResNet-20 architecture to analyze six different aggregation strategies for Bayesian DL models. Additionally, we analyze the traditional federated averaging approach applied to an approximate Bayesian Monte Carlo dropout model as a lightweight alternative to more complex variational inference methods in FL. We show that aggregation strategy is a key hyperparameter in the design of a Bayesian FL system with downstream effects on accuracy, calibration, uncertainty quantification, training stability, and client compute requirements.
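The abstract notes that weights in Bayesian DL models are probability distributions, so deterministic aggregation rules such as federated averaging cannot be applied directly. The sketch below illustrates the core issue for a mean-field Gaussian posterior N(mu, sigma^2) over a single weight: naively averaging client variances discards the disagreement between client means, whereas moment matching of the client mixture retains it. This is a minimal illustrative example, not the paper's exact aggregation strategies; the function name `aggregate_gaussians` and the toy data are hypothetical.

```python
import numpy as np

def aggregate_gaussians(mus, vars_, n_samples):
    """Aggregate per-client Gaussian posteriors into one server posterior.

    mus, vars_ : per-client arrays of posterior means and variances
    n_samples  : per-client local dataset sizes (aggregation weights)
    Returns the server mean plus two candidate server variances:
    a naive FedAvg of variances, and a moment-matched mixture variance.
    """
    w = np.asarray(n_samples, dtype=float)
    w /= w.sum()  # normalize dataset-size weights

    # FedAvg applied to the means (same as the deterministic case).
    mu_server = sum(wi * mi for wi, mi in zip(w, mus))

    # Naive: average the variances directly. This ignores how far apart
    # the client means are, so it can be overconfident.
    var_naive = sum(wi * vi for wi, vi in zip(w, vars_))

    # Moment matching of the weighted mixture of client Gaussians:
    # Var = E[sigma_i^2 + mu_i^2] - mu_server^2, which adds the spread
    # of the client means back into the server variance.
    var_mixture = (
        sum(wi * (vi + mi**2) for wi, vi, mi in zip(w, vars_, mus))
        - mu_server**2
    )
    return mu_server, var_naive, var_mixture

# Toy example: two equally sized clients that agree on variance (1.0)
# but disagree on the mean (0.0 vs 2.0).
mus = [np.array([0.0]), np.array([2.0])]
vars_ = [np.array([1.0]), np.array([1.0])]
mu, v_naive, v_mix = aggregate_gaussians(mus, vars_, n_samples=[100, 100])
```

In this toy case the naive rule keeps the server variance at 1.0, while moment matching raises it to 2.0 because the clients disagree on the mean; the gap between the two is one concrete way aggregation choice affects the calibration and uncertainty quantification the abstract discusses.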
Pages: 185790-185806
Page count: 17