Gaussian sum approach with optimal experiment design for neural network

Times Cited: 0
Authors
Hering, Pavel [1]
Simandl, Miroslav [1]
Affiliations
[1] Univ West Bohemia, Fac Sci Appl, Dept Cybernet, Univ 8, Plzen 30614, Czech Republic
Source
PROCEEDINGS OF THE NINTH IASTED INTERNATIONAL CONFERENCE ON SIGNAL AND IMAGE PROCESSING | 2007
Keywords
system identification; optimal experiment design; nonlinear parameter estimation; probability density function; multi-layer perceptron network;
DOI
Not available
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronic and Communication Technology];
Discipline Classification Codes
0808; 0809;
Abstract
System identification is the discipline of constructing mathematical models of stochastic systems from measured experimental data. The choice of input signal plays a significant role in system identification, since it influences the quality of the obtained model. This paper treats the design of an optimal input signal for a system modelled by a multi-layer perceptron network. Because the true system is unknown, the design can be constructed only from the currently available model. However, neural networks with the same structure that differ only in their parameter values can approximate very different nonlinear mappings, so it is crucial to make maximal use of the available information when selecting suitable input data. Hence a global estimation method is used that determines the conditional probability density functions of the network parameters. The Gaussian sum approach, which approximates an arbitrary probability density function by a sum of normal distributions, is well suited to this task: it is a less computationally demanding alternative to sequential Monte Carlo methods and gives better results than the commonly used prediction error methods. The properties of the proposed experiment design are demonstrated in a numerical example.
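Since the record contains no implementation, the following is a minimal sketch of the Gaussian sum idea described in the abstract: the posterior density of the network weights is kept as a mixture of Gaussians, each measurement updates every component with an extended Kalman filter step, and the mixture weights are reweighted by the component likelihoods. The single-hidden-layer network, its size, the noise levels, and the uniformly random inputs below are illustrative assumptions, not details taken from the paper (which designs the inputs optimally rather than drawing them at random).

```python
import numpy as np

rng = np.random.default_rng(0)

N_HIDDEN = 3                 # hidden units of the illustrative MLP (assumption)
N_W = 3 * N_HIDDEN + 1       # all weights stacked into one parameter vector theta


def mlp(theta, u):
    """Single-input single-output perceptron: y = c + v . tanh(w*u + b)."""
    w, b = theta[:N_HIDDEN], theta[N_HIDDEN:2 * N_HIDDEN]
    v, c = theta[2 * N_HIDDEN:3 * N_HIDDEN], theta[-1]
    return c + v @ np.tanh(w * u + b)


def mlp_jacobian(theta, u):
    """Gradient of the network output with respect to the weight vector."""
    w, b = theta[:N_HIDDEN], theta[N_HIDDEN:2 * N_HIDDEN]
    v = theta[2 * N_HIDDEN:3 * N_HIDDEN]
    z = np.tanh(w * u + b)
    dz = 1.0 - z ** 2
    return np.concatenate([v * dz * u, v * dz, z, [1.0]])


# Gaussian sum prior over the weights: M components (alpha_j, m_j, P_j).
M = 5
alphas = np.full(M, 1.0 / M)                        # mixture weights
means = rng.normal(0.0, 1.0, size=(M, N_W))         # component means
covs = np.stack([np.eye(N_W) for _ in range(M)])    # component covariances

R = 0.01    # measurement noise variance (assumed known)
Q = 1e-6    # small artificial process noise keeps the components adaptive


def gaussian_sum_update(u, y, alphas, means, covs):
    """One measurement update: an EKF step per component, then reweighting."""
    likelihoods = np.empty(M)
    for j in range(M):
        H = mlp_jacobian(means[j], u)               # local linearisation
        P = covs[j] + Q * np.eye(N_W)               # predicted covariance
        S = H @ P @ H + R                           # innovation variance (scalar)
        K = P @ H / S                               # Kalman gain
        resid = y - mlp(means[j], u)
        means[j] = means[j] + K * resid
        covs[j] = P - np.outer(K, H @ P)
        likelihoods[j] = np.exp(-0.5 * resid ** 2 / S) / np.sqrt(2 * np.pi * S)
    alphas = alphas * likelihoods + 1e-300          # floor guards against underflow
    alphas /= alphas.sum()                          # posterior mixture weights
    return alphas, means, covs


# Run on data from a hypothetical "true" network; the inputs here are random
# placeholders, whereas the paper selects them by optimal experiment design.
theta_true = rng.normal(0.0, 1.0, size=N_W)
for _ in range(200):
    u = rng.uniform(-2.0, 2.0)
    y = mlp(theta_true, u) + rng.normal(0.0, np.sqrt(R))
    alphas, means, covs = gaussian_sum_update(u, y, alphas, means, covs)

theta_hat = alphas @ means                          # posterior mean estimate
print("dominant component weight:", alphas.max())
print("prediction error at u=1.0:", mlp(theta_hat, 1.0) - mlp(theta_true, 1.0))
```

The mixture of EKF components is what makes the approach cheaper than sequential Monte Carlo: a handful of Gaussians stands in for thousands of particles, while still capturing a multimodal posterior over the network weights.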
Pages: 425 / +
Page count: 2