A general framework for quantifying aleatoric and epistemic uncertainty in graph neural networks

Cited by: 8
Authors
Munikoti, Sai [1 ]
Agarwal, Deepesh [1 ]
Das, Laya [2 ]
Natarajan, Balasubramaniam [1 ]
Affiliations
[1] Kansas State Univ, Dept Elect & Comp Engn, Manhattan, KS 66506 USA
[2] Swiss Fed Inst Technol, Reliabil & Risk Engn Lab, CH-8092 Zurich, Switzerland
Funding
US National Science Foundation
Keywords
Uncertainty quantification; Graph neural network; Bayesian model; Assumed density filtering; Node classification; DROPOUT;
DOI
10.1016/j.neucom.2022.11.049
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Graph Neural Networks (GNN) provide a powerful framework that elegantly integrates graph theory with machine learning for modeling and analysis of networked data. We consider the problem of quantifying the uncertainty in predictions of a GNN stemming from modeling errors and measurement uncertainty. We consider aleatoric uncertainty in the form of probabilistic links and noise in the feature vectors of nodes, while epistemic uncertainty is incorporated via a probability distribution over the model parameters. We propose a unified approach to treat both sources of uncertainty in a Bayesian framework, where Assumed Density Filtering is used to quantify aleatoric uncertainty and Monte Carlo dropout captures uncertainty in model parameters. Finally, the two sources of uncertainty are aggregated to estimate the total uncertainty in the predictions of a GNN. Results on real-world datasets demonstrate that the Bayesian model performs on par with a frequentist model and provides additional information about prediction uncertainty that is sensitive to uncertainties in the data and model. (c) 2022 Elsevier B.V. All rights reserved.
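The Monte Carlo dropout idea named in the abstract can be illustrated with a minimal NumPy sketch: dropout is left active at prediction time, the stochastic forward pass is repeated many times, and the spread of the resulting predictions serves as an epistemic uncertainty estimate. The toy linear "model" and function names below are illustrative assumptions, not the paper's implementation (which applies this inside a GNN and combines it with Assumed Density Filtering).

```python
import numpy as np

def mc_dropout_predict(forward, x, n_samples=100, rng=None):
    # Run the stochastic forward pass repeatedly; the mean over passes is
    # the prediction, and the variance across passes approximates the
    # epistemic (model) uncertainty.
    rng = np.random.default_rng(rng)
    preds = np.stack([forward(x, rng) for _ in range(n_samples)])
    return preds.mean(axis=0), preds.var(axis=0)

def make_forward(W, p=0.5):
    # Toy stochastic model: a linear layer whose inputs are dropped with
    # probability p on every call (dropout deliberately kept on at test time).
    def forward(x, rng):
        mask = rng.random(x.shape) > p       # Bernoulli dropout mask
        return (x * mask / (1 - p)) @ W      # inverted-dropout scaling
    return forward

W = np.array([[1.0], [2.0], [3.0]])
x = np.array([1.0, 1.0, 1.0])
mean, var = mc_dropout_predict(make_forward(W), x, n_samples=2000, rng=0)
# mean is close to the deterministic output x @ W = 6; var > 0 reflects
# the model-parameter uncertainty induced by the dropout masks.
```

In the paper's setting the aleatoric variance (from ADF over noisy features and probabilistic links) would then be added to this epistemic variance to give the total predictive uncertainty.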
Pages: 1-10
Page count: 10