Robust PACm: Training Ensemble Models Under Misspecification and Outliers

Cited by: 3
Authors
Zecchin, Matteo [1 ]
Park, Sangwoo [1 ]
Simeone, Osvaldo [1 ]
Kountouris, Marios [2 ]
Gesbert, David [2 ]
Affiliations
[1] King's College London, Department of Engineering, King's Communications, Learning & Information Processing (KCLIP) Lab, London WC2R 2LS, England
[2] EURECOM, Communication Systems Department, F-06410 Sophia Antipolis, France
Funding
UK Engineering and Physical Sciences Research Council (EPSRC)
Keywords
Bayes methods; Pollution measurement; Standards; Europe; Training; Robustness; Predictive models; Bayesian learning; ensemble models; machine learning; misspecification; outliers; robustness; Bayesian inference
DOI
10.1109/TNNLS.2023.3295168
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Standard Bayesian learning is known to have suboptimal generalization capabilities under model misspecification and in the presence of outliers. Probably approximately correct (PAC)-Bayes theory demonstrates that the free energy criterion minimized by Bayesian learning is a bound on the generalization error of Gibbs predictors (i.e., of single models drawn at random from the posterior) under the assumption that the sampling distribution is uncontaminated by outliers. This viewpoint explains the limitations of Bayesian learning when the model is misspecified, in which case ensembling is required, and when data are affected by outliers. In recent work, PAC-Bayes bounds, referred to as PACm, were derived to introduce free energy metrics that account for the performance of ensemble predictors, yielding enhanced performance under misspecification. This work presents a novel robust free energy criterion that combines the generalized logarithm score function with the PACm ensemble bounds. The proposed training criterion produces predictive distributions that concurrently counteract the detrimental effects of misspecification, with respect to both the likelihood and the prior distribution, and of outliers.
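The two ingredients named in the abstract, the generalized (t-)logarithm score and the m-sample ensemble loss, can be sketched compactly. Below is a minimal, illustrative Python sketch, not the paper's implementation: the names `log_t` and `robust_m_sample_loss`, the default t value, and the array layout are assumptions, and the paper's full free energy criterion additionally includes a KL-divergence regularizer between the posterior q and the prior, which is omitted here.

```python
import numpy as np

def log_t(x, t=0.9):
    # Generalized (Tsallis) t-logarithm: log_t(x) = (x**(1-t) - 1) / (1 - t).
    # Recovers the natural log as t -> 1; for t < 1 it is bounded below by
    # -1/(1-t) as x -> 0+, which caps the loss contributed by outliers.
    if t == 1.0:
        return np.log(x)
    return (x ** (1.0 - t) - 1.0) / (1.0 - t)

def robust_m_sample_loss(likelihoods, t=0.9):
    # likelihoods: array of shape (m, N) holding p(x_n | theta_j) for m models
    # theta_1, ..., theta_m drawn i.i.d. from the posterior q, over N data points.
    # The m-sample (PACm-style) loss scores the mixture predictive density,
    # here with the t-logarithm in place of the standard log.
    ensemble = likelihoods.mean(axis=0)   # (1/m) * sum_j p(x_n | theta_j)
    return -log_t(ensemble, t).mean()     # average negative t-log score

# Toy usage: 4 posterior samples, 10 data points, one outlier whose likelihood
# is tiny under every model; the bounded t-log keeps its loss contribution finite.
rng = np.random.default_rng(0)
lik = rng.uniform(0.2, 0.9, size=(4, 10))
lik[:, -1] = 1e-6  # outlier
print(robust_m_sample_loss(lik, t=0.9))
```

With t = 0.9, the outlier's per-point loss is capped near 1/(1 - t) = 10, whereas the standard log would charge it about 13.8 and let a single contaminated point dominate the criterion.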
Pages: 16518-16532
Page count: 15