Using multilayer perceptrons as receptive fields in the design of neural networks

Cited by: 15
Authors
Cimino, Mario G. C. A. [1 ]
Pedrycz, Witold [2 ,3 ]
Lazzerini, Beatrice [1 ]
Marcelloni, Francesco [1 ]
Affiliations
[1] Univ Pisa, Dipartimento Ingn Informaz Elettron Informat Tele, I-56122 Pisa, Italy
[2] Univ Alberta, Dept Elect & Comp Engn, Edmonton, AB T6G 2G7, Canada
[3] Polish Acad Sci, Syst Res Inst, PL-01447 Warsaw, Poland
Funding
Natural Sciences and Engineering Research Council of Canada
Keywords
Conditional clustering; Local modeling; Neural receptive fields; Radial basis function (RBF) networks; Referential neural networks; Regression; Algorithm; Mixtures
DOI
10.1016/j.neucom.2008.10.014
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
In this paper, we propose a new neural network architecture based on a family of referential multilayer perceptrons (RMLPs) that play the role of generalized receptive fields. In contrast to "standard" radial basis function (RBF) neural networks, the proposed network topology offers considerable flexibility, as the resulting receptive fields are highly diversified and capable of adjusting to the characteristics of the locally available experimental data. We discuss in detail a design strategy for the novel architecture that fully exploits the modeling capabilities of the contributing RMLPs. The strategy comprises three phases. In the first phase, we form a "blueprint" of the network by employing a specialized version of the well-known fuzzy C-means (FCM) clustering algorithm, namely the conditional (context-based) FCM. The intent of this phase is to generate a collection of information granules (fuzzy sets) in the space of input and output variables, narrowed down to certain contexts. In the second phase, taking a global view of the structure, we refine the input-output relationships by engaging a collection of RMLPs, where each RMLP is trained on the subset of data associated with the corresponding context fuzzy set. During training, each receptive field focuses on the characteristics of its locally available data and builds a nonlinear mapping in a referential mode. Finally, the connections of the receptive fields are optimized through global minimization performed at the linear aggregation unit located at the output layer of the overall architecture. We also include a series of numeric experiments involving synthetic and real-world data sets that provide a thorough comparison with standard RBF neural networks. (c) 2008 Elsevier B.V. All rights reserved.
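As a rough illustration of the first design phase only, the following is a minimal NumPy sketch of conditional (context-based) FCM under its usual formulation. The function name conditional_fcm, the context-membership vector f, and all parameter defaults are assumptions made for illustration, not the authors' implementation; the referential-MLP and aggregation phases are deliberately left out.

import numpy as np

def conditional_fcm(X, f, c, m=2.0, iters=100, seed=0):
    # Conditional (context-based) FCM: the memberships of sample k across
    # the c clusters are constrained to sum to f[k], its membership in the
    # current context fuzzy set, rather than to 1 as in standard FCM.
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((c, n))
    U = f * U / U.sum(axis=0)                     # columns now sum to f[k]
    for _ in range(iters):
        Um = U ** m
        V = (Um @ X) / Um.sum(axis=1, keepdims=True)          # prototypes
        d = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2) + 1e-12
        # u_ik = f_k / sum_j (d_ik / d_jk)^(2/(m-1))
        U = f / ((d[:, None, :] / d[None, :, :]) ** (2.0 / (m - 1))).sum(axis=1)
    return U, V

# Toy usage: cluster 1-D inputs under a single synthetic context.
X = np.linspace(0.0, 1.0, 50)[:, None]
f = np.exp(-((X[:, 0] - 0.5) ** 2) / 0.1)   # context membership of each sample
U, V = conditional_fcm(X, f, c=3)

In the architecture described by the abstract, a context would be a fuzzy set over the output variable; phases two and three would then train one RMLP per context on the samples with high membership in that context and fit the linear aggregation unit at the output layer globally, e.g. by least squares over the receptive-field activations.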
Pages: 2536-2548
Page count: 13