RANDOM NEURAL NETWORK MODEL FOR SUPERVISED LEARNING PROBLEMS

Cited by: 4
Authors
Basterrech, S. [1 ]
Rubino, G. [2 ]
Affiliations
[1] VSB Tech Univ Ostrava, Natl Supercomp Ctr, Ostrava, Czech Republic
[2] INRIA Rennes, F-35042 Rennes, France
Keywords
neural networks; random neural networks; supervised learning; pattern recognition; G-networks; queuing networks; packet network; classification; matrix
DOI
10.14311/NNW.2015.25.024
CLC Classification
TP18 [Theory of artificial intelligence]
Subject Classification
081104; 0812; 0835; 1405
Abstract
Random Neural Networks (RNNs) are a class of Neural Networks (NNs) that can also be seen as a specific type of queuing network. They have been used successfully in several domains over the last 25 years: as queuing networks, to analyze the performance of resource sharing in many engineering areas; as learning tools and in combinatorial optimization, where they are seen as neural systems; and as models of neurological aspects of living beings. In this article we focus on their learning capabilities and, more specifically, present a practical guide to using RNNs to solve supervised learning problems. We give a general description of these models, using the terminology of Queuing Theory and that of Neural Networks almost interchangeably. We present the standard learning procedures used by RNNs, adapted from well-established techniques in the standard NN field. In particular, we describe a set of learning algorithms covering methods based on first-order and then on second-order derivatives. We also discuss some issues related to these models and present new perspectives on their use in supervised learning problems. The tutorial describes their most relevant applications and provides an extensive bibliography.
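The RNN described in the abstract is Gelenbe's G-network model, in which the stationary activity q_i of neuron i solves the nonlinear fixed point q_i = T_i^+ / (r_i + T_i^-), where T_i^+ and T_i^- are the total excitatory and inhibitory arrival rates and r_i is the firing rate. As an illustrative sketch only (not code from the paper; the function and variable names `rnn_steady_state`, `W_plus`, `W_minus` are our own choices), the feedforward evaluation can be computed by fixed-point iteration:

```python
import numpy as np

def rnn_steady_state(W_plus, W_minus, Lambda, lam, tol=1e-9, max_iter=1000):
    """Fixed-point iteration for the stationary activities q_i of a
    Gelenbe Random Neural Network (illustrative sketch).

    W_plus[j, i]  : excitatory weight w+(j, i) = r_j * p+(j, i)
    W_minus[j, i] : inhibitory weight w-(j, i) = r_j * p-(j, i)
    Lambda, lam   : external excitatory / inhibitory arrival rates
    """
    n = len(Lambda)
    # Firing rate of each neuron: sum of its outgoing weights.
    r = W_plus.sum(axis=1) + W_minus.sum(axis=1)
    q = np.zeros(n)
    for _ in range(max_iter):
        T_plus = Lambda + q @ W_plus     # total excitatory arrival rate
        T_minus = lam + q @ W_minus      # total inhibitory arrival rate
        # Stationary activity, clamped to [0, 1] (stability condition).
        q_new = np.minimum(T_plus / (r + T_minus), 1.0)
        if np.max(np.abs(q_new - q)) < tol:
            return q_new
        q = q_new
    return q
```

The gradient-based learning procedures surveyed in the paper differentiate these stationary activities q_i with respect to the weights W_plus and W_minus; this sketch covers only the forward computation.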
Pages: 457-499 (43 pages)
Related Papers
50 records total
  • [41] Dynamics of spiking map-based neural networks in problems of supervised learning
    Pugavko, Mechislav M.
    Maslennikov, Oleg V.
    Nekorkin, Vladimir I.
    COMMUNICATIONS IN NONLINEAR SCIENCE AND NUMERICAL SIMULATION, 2020, 90
  • [42] Basis operator network: A neural network-based model for learning nonlinear operators via neural basis
    Hua, Ning
    Lu, Wenlian
    NEURAL NETWORKS, 2023, 164 : 21 - 37
  • [43] Neural network based PD source classification using a combined topology of unsupervised and supervised learning algorithm
    Chang, C
    Su, Q
    2001 POWER ENGINEERING SOCIETY SUMMER MEETING, VOLS 1-3, CONFERENCE PROCEEDINGS, 2001, : 1293 - 1298
  • [44] Deep Learning with Random Neural Networks
    Gelenbe, Erol
    Yin, Yongha
    PROCEEDINGS OF SAI INTELLIGENT SYSTEMS CONFERENCE (INTELLISYS) 2016, VOL 2, 2018, 16 : 450 - 462
  • [45] Neural Field: Supervised Apportioned Incremental Learning (SAIL)
    Lovinger, Justin
    Valova, Iren
    2016 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2016, : 2500 - 2506
  • [46] COSNet: A Cost Sensitive Neural Network for Semi-supervised Learning in Graphs
    Bertoni, Alberto
    Frasca, Marco
    Valentini, Giorgio
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, PT I, 2011, 6911 : 219 - 234
  • [47] A MATLAB toolbox for Self Organizing Maps and supervised neural network learning strategies
    Ballabio, Davide
    Vasighi, Mahdi
    CHEMOMETRICS AND INTELLIGENT LABORATORY SYSTEMS, 2012, 118 : 24 - 32
  • [48] A STOCHASTIC PARALLEL ALGORITHM FOR SUPERVISED LEARNING IN NEURAL NETWORKS
    PANDYA, AS
    VENUGOPAL, KP
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 1994, E77D (04) : 376 - 384
  • [49] A New Supervised Spiking Neural Network
    Zhang Chun-wei
    Liu Liu-jiang
    ICICTA: 2009 SECOND INTERNATIONAL CONFERENCE ON INTELLIGENT COMPUTATION TECHNOLOGY AND AUTOMATION, VOL I, PROCEEDINGS, 2009, : 23 - 26
  • [50] Random Search Algorithm with Self-Learning for Neural Network Training
    V. A. Kostenko
    L. E. Seleznev
    Optical Memory and Neural Networks, 2021, 30 : 180 - 186