Two issues in using mixtures of polynomials for inference in hybrid Bayesian networks

Cited by: 19
Author
Shenoy, Prakash P. [1]
Affiliation
[1] University of Kansas, School of Business, Lawrence, KS 66045, USA
Keywords
Inference in hybrid Bayesian networks; Mixtures of polynomials; Conditional linear Gaussian distributions; Lagrange interpolating polynomials; Chebyshev points; Conditional log-normal distributions
DOI
10.1016/j.ijar.2012.01.008
Chinese Library Classification
TP18 [Artificial intelligence theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
We discuss two issues in using mixtures of polynomials (MOPs) for inference in hybrid Bayesian networks. MOPs were proposed by Shenoy and West for mitigating the problem of integration in inference in hybrid Bayesian networks. First, in defining MOPs for multi-dimensional functions, one requirement is that the pieces on which the polynomials are defined must be hypercubes. In this paper, we discuss relaxing this condition so that each piece is defined on a region called a hyper-rhombus. This relaxation means that MOPs are closed under the transformations required for multi-dimensional linear deterministic conditionals, such as Z = X + Y. It also allows us to construct MOP approximations of the probability density functions (PDFs) of multi-dimensional conditional linear Gaussian (CLG) distributions from a MOP approximation of the PDF of the univariate standard normal distribution. Second, Shenoy and West suggest using the Taylor series expansion of differentiable functions to find MOP approximations of PDFs. In this paper, we describe a new method for finding MOP approximations based on Lagrange interpolating polynomials (LIPs) with Chebyshev points, and we show how this method yields efficient MOP approximations of PDFs. We illustrate our methods using CLG PDFs in one, two, and three dimensions, and conditional log-normal PDFs in one and two dimensions. We compare the efficiency of the hyper-rhombus condition with that of the hypercube condition, and the LIP method with the Taylor series method. (C) 2012 Elsevier Inc. All rights reserved.
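To make the abstract's two ideas concrete: under the hyper-rhombus condition, the region of a piece may take the form l_i(x_1, ..., x_{i-1}) <= x_i <= u_i(x_1, ..., x_{i-1}), where the bounds l_i and u_i are linear functions of the preceding variables rather than the constants required by the hypercube condition. The sketch below is not code from the paper; the interval [-3, 3], the nine nodes, and the resulting degree-8 piece are illustrative assumptions. It shows only the LIP step, interpolating the standard normal PDF at Chebyshev points and checking the approximation error on a grid.

import numpy as np
from scipy.interpolate import lagrange
from scipy.stats import norm

def chebyshev_points(a, b, n):
    # Chebyshev points of the first kind, mapped from [-1, 1] to [a, b].
    k = np.arange(1, n + 1)
    x = np.cos((2.0 * k - 1.0) * np.pi / (2.0 * n))
    return 0.5 * (a + b) + 0.5 * (b - a) * x

# One polynomial piece for the standard normal PDF on [-3, 3].
# A full MOP approximation would use several such pieces and
# renormalize so the result integrates to 1; that bookkeeping
# is omitted in this sketch.
nodes = chebyshev_points(-3.0, 3.0, 9)    # 9 nodes -> degree-8 interpolant
piece = lagrange(nodes, norm.pdf(nodes))  # numpy.poly1d object

grid = np.linspace(-3.0, 3.0, 601)
print("max abs error:", np.max(np.abs(piece(grid) - norm.pdf(grid))))

Chebyshev points are preferred over equally spaced nodes because they keep the interpolation error from blowing up near the interval endpoints (Runge's phenomenon) as the degree grows.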
Pages: 847-866
Page count: 20