A Framework for Including Uncertainty in Robustness Evaluation of Bayesian Neural Network Classifiers

Cited: 0
Authors
Essbai, Wasim [1]
Bombarda, Andrea [2]
Bonfanti, Silvia [2]
Gargantini, Angelo [2]
Affiliations
[1] Tech Univ Wien, Vienna, Austria
[2] Univ Bergamo, Bergamo, Italy
Source
PROCEEDINGS OF THE 2024 IEEE/ACM INTERNATIONAL WORKSHOP ON DEEP LEARNING FOR TESTING AND TESTING FOR DEEP LEARNING, DEEPTEST 2024 | 2024
Keywords
Robustness; Bayesian Neural Networks; Alterations; Uncertainty
DOI
10.1145/3643786.3648026
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Neural networks (NNs) play a crucial role in safety-critical fields, where robustness assurance is required. Bayesian Neural Networks (BNNs) address data uncertainty by providing probabilistic outputs. However, the literature on BNN robustness assessment is still limited and mainly focuses on adversarial examples, which are often impractical in real-world applications. This paper introduces a fresh perspective on BNN classifier robustness that considers natural input variations while accounting for prediction uncertainty. Our approach excludes predictions labeled as "unknown" and enables practitioners to define alteration probabilities, penalize errors beyond a specified threshold, and tolerate varying error levels below it. We present a systematic approach for evaluating the robustness of BNNs, introducing new evaluation metrics that account for prediction uncertainty. We conduct a comparative study of two NNs, a standard MLP and a Bayesian MLP, on the MNIST dataset. Our results show that, by leveraging the estimated uncertainty, it is possible to enhance the system's robustness.
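
As a rough illustration of the kind of uncertainty-aware evaluation the abstract describes, the sketch below (Python/NumPy) computes a robustness-style score over altered inputs that discards predictions whose uncertainty exceeds a threshold, i.e. those labelled "unknown". The predict_samples interface, the normalised-entropy uncertainty measure, the threshold value, and the coverage output are illustrative assumptions, not the metric definitions from the paper.

import numpy as np

def robustness_with_uncertainty(predict_samples, x_altered, y_true,
                                n_samples=30, unknown_threshold=0.3):
    # predict_samples(x, n) is assumed to return class probabilities of shape
    # (n, len(x), n_classes) from n stochastic forward passes of a BNN.
    probs = predict_samples(x_altered, n_samples)
    mean_probs = probs.mean(axis=0)                      # (N, C) predictive mean
    # Normalised predictive entropy as the uncertainty measure (one common choice).
    entropy = -(mean_probs * np.log(mean_probs + 1e-12)).sum(axis=1)
    entropy /= np.log(mean_probs.shape[1])
    known = entropy <= unknown_threshold                  # reject "unknown" predictions
    if not known.any():
        return {"robustness": float("nan"), "coverage": 0.0}
    preds = mean_probs.argmax(axis=1)
    return {"robustness": float((preds[known] == y_true[known]).mean()),  # accuracy on retained inputs
            "coverage": float(known.mean())}              # fraction not labelled "unknown"

# Toy usage with a dummy stochastic predictor standing in for a trained BNN.
# A purely random predictor is highly uncertain, so most (often all) inputs
# end up rejected as "unknown", which is the intended behaviour of the filter.
rng = np.random.default_rng(0)

def dummy_predict(x, n):
    logits = rng.normal(size=(n, len(x), 10))
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

x_altered = np.zeros((100, 28 * 28))                      # stand-in for altered MNIST images
y_true = rng.integers(0, 10, size=100)
print(robustness_with_uncertainty(dummy_predict, x_altered, y_true))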
Pages: 25-32
Number of Pages: 8