β-Cores: Robust Large-Scale Bayesian Data Summarization in the Presence of Outliers

Citations: 0
Authors
Manousakas, Dionysis [1 ]
Mascolo, Cecilia [1 ]
Affiliations
[1] Univ Cambridge, Cambridge, England
Source
WSDM '21: PROCEEDINGS OF THE 14TH ACM INTERNATIONAL CONFERENCE ON WEB SEARCH AND DATA MINING | 2021
Keywords
Scalable learning; Big data summarizations; Coresets; Variational inference; Robust statistics; Noisy observations; Data valuation
DOI
10.1145/3437963.3441793
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Modern machine learning applications should be able to address the intrinsic challenges that arise in inference over massive real-world datasets, including scalability and robustness to outliers. Despite the multiple benefits of Bayesian methods (such as uncertainty-aware predictions, incorporation of expert knowledge, and hierarchical modeling), the quality of classic Bayesian inference depends critically on whether observations conform to the assumed data-generating model, which is impossible to guarantee in practice. In this work, we propose a variational inference method that, in a principled way, can simultaneously scale to large datasets and robustify the inferred posterior against outliers in the observed data. Reformulating Bayes' theorem via the beta-divergence, we posit a robustified generalized Bayesian posterior as the target of inference. Moreover, building on recent formulations of Riemannian coresets for scalable Bayesian inference, we propose a sparse variational approximation of the robustified posterior and an efficient stochastic black-box algorithm to construct it. Overall, our method allows releasing cleansed data summaries that can be applied broadly in scenarios involving structured and unstructured data contamination. We illustrate the applicability of our approach on diverse simulated and real-world datasets and various statistical models, including Gaussian mean inference, logistic regression, and neural linear regression, demonstrating its superiority over existing Bayesian summarization methods in the presence of outliers.
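
For context, the beta-divergence reformulation mentioned in the abstract is conventionally written via the density power divergence of Basu et al. (1998). The following is a minimal sketch of the resulting generalized posterior, with the notation (prior \pi_0, likelihood p(x | \theta), robustness parameter \beta) assumed here rather than taken from the paper itself:

    % Robustified beta-posterior: the prior is reweighted by a beta-loss
    % in place of the negative log-likelihood (sketch; notation assumed)
    \pi_\beta(\theta \mid x_{1:N}) \;\propto\; \pi_0(\theta)\,
        \exp\!\Big( -\sum_{n=1}^{N} \ell_\beta(x_n, \theta) \Big),
    \qquad
    \ell_\beta(x, \theta) \;=\; -\frac{1}{\beta}\, p(x \mid \theta)^{\beta}
        \;+\; \frac{1}{1+\beta} \int p(y \mid \theta)^{1+\beta} \, \mathrm{d}y

As \beta -> 0, \ell_\beta recovers the negative log-likelihood up to \theta-independent constants, so standard Bayesian inference is the limiting case; for \beta > 0, observations assigned low density by the model are exponentially down-weighted, which is the source of the robustness to outliers.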
Pages: 940-948
Number of pages: 9