A more powerful Random Neural Network model in supervised learning applications

Cited: 0
Authors
Basterrech, Sebastian [1]
Rubino, Gerardo [2]
Affiliations
[1] VSB Tech Univ Ostrava, IT4Innovat, Ostrava, Czech Republic
[2] INRIA, Rennes, France
Source
2013 INTERNATIONAL CONFERENCE OF SOFT COMPUTING AND PATTERN RECOGNITION (SOCPAR) | 2013
Keywords
Random Neural Network; Supervised Learning; Pattern Recognition; Numerical Optimization; Gradient Descent
DOI
Not available
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Since the early 1990s, Random Neural Networks (RNNs) have gained importance in the Neural Networks and Queueing Networks communities. RNNs are inspired by biological neural networks and are also an extension of open Jackson networks in Queueing Theory. In 1993, a gradient-type learning algorithm was introduced in order to use RNNs in supervised learning tasks. This method considers only the weight connections among the neurons as adjustable parameters; all other parameters are kept fixed during the training process. The RNN model has been successfully applied to several types of problems, such as supervised learning, pattern recognition, optimization, image processing, and associative memory. In this contribution we present a modification of the classic model obtained by extending the set of adjustable parameters. The modification increases the potential of the RNN model in supervised learning tasks while keeping the same network topology and the same time complexity of the learning algorithm. We describe the new equations implementing a gradient descent learning technique for the model.
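The abstract builds on the standard (Gelenbe) RNN, whose outputs are the stationary excitation probabilities of the neurons obtained from a set of nonlinear flow equations; the gradient-descent training described in the paper adjusts the weights that enter these equations. As a rough, hedged illustration of the underlying computation only (not the authors' extended model, and without the analytic gradient step), the Python sketch below solves the classic RNN steady-state equations by fixed-point iteration; all variable names and the small example network are hypothetical.

import numpy as np

def rnn_steady_state(W_plus, W_minus, Lambda, lam, tol=1e-9, max_iter=1000):
    """Solve the classic RNN steady-state equations by fixed-point iteration.

    W_plus[i, j]  : excitatory weight (rate) from neuron i to neuron j
    W_minus[i, j] : inhibitory weight (rate) from neuron i to neuron j
    Lambda[i]     : rate of exogenous positive signals arriving at neuron i
    lam[i]        : rate of exogenous negative signals arriving at neuron i

    Returns q, the stationary probability that each neuron is excited:
        q[i] = lambda_plus[i] / (r[i] + lambda_minus[i]),
    where r[i] = sum_j (W_plus[i, j] + W_minus[i, j]) is the firing rate,
          lambda_plus[i]  = Lambda[i] + sum_j q[j] * W_plus[j, i],
          lambda_minus[i] = lam[i]    + sum_j q[j] * W_minus[j, i].
    """
    r = W_plus.sum(axis=1) + W_minus.sum(axis=1)    # total firing rate per neuron
    q = np.zeros_like(Lambda, dtype=float)
    for _ in range(max_iter):
        lam_plus = Lambda + q @ W_plus              # incoming excitatory flow
        lam_minus = lam + q @ W_minus               # incoming inhibitory flow
        q_new = np.clip(lam_plus / (r + lam_minus), 0.0, 1.0)
        if np.max(np.abs(q_new - q)) < tol:
            return q_new
        q = q_new
    return q

# Tiny hypothetical 3-neuron network, only to exercise the solver.
W_plus = np.array([[0.0, 0.3, 0.2],
                   [0.1, 0.0, 0.4],
                   [0.2, 0.2, 0.0]])
W_minus = np.array([[0.0, 0.1, 0.1],
                    [0.1, 0.0, 0.1],
                    [0.1, 0.1, 0.0]])
Lambda = np.array([0.5, 0.2, 0.1])   # exogenous positive signal rates
lam = np.array([0.0, 0.1, 0.1])      # exogenous negative signal rates

q = rnn_steady_state(W_plus, W_minus, Lambda, lam)
print(q)  # stationary excitation probabilities, used as the network outputs

In a supervised learning setup, a gradient-type algorithm such as the one referenced in the abstract would repeat this steady-state computation at every training step and update the weight matrices so that the output neurons' values q approach the desired targets.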
Pages: 201-206
Number of pages: 6
Related Papers
50 records in total
  • [1] Basterrech, S.; Rubino, G. Random Neural Network Model for Supervised Learning Problems. Neural Network World, 2015, 25(5): 457-499.
  • [2] Basterrech, Sebastian; Janousek, Jan; Snasel, Vaclav. A Performance Study of Random Neural Network as Supervised Learning Tool Using CUDA. Journal of Internet Technology, 2016, 17(4): 771-778.
  • [3] Georgiopoulos, Michael; Li, Cong; Kocak, Taskin. Learning in the Feed-Forward Random Neural Network: A Critical Review. Performance Evaluation, 2011, 68(4): 361-384.
  • [4] Alfawaer, Zeyad M.; Alzoubi, Saleem. A Novel Supervised Learning Model for Figures Recognition by Using Artificial Neural Network. Emerging Technologies in Computing, ICETIC 2018, 2018, 200: 199-208.
  • [5] Yusoff, Nooraini; Gruening, Andre. Supervised Associative Learning in Spiking Neural Network. Artificial Neural Networks - ICANN 2010, Pt I, 2010, 6352: 224-229.
  • [6] Takeda, T.; Mizoe, H.; Kishi, K.; Matsuoka, T. Forced Formation of a Geometrical Feature Space by a Neural-Network Model with Supervised Learning. IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, 1993, E76A(7): 1129-1132.
  • [7] Tian, Daxin; Liu, Yanheng; Wei, Da. A Dynamic Growing Neural Network for Supervised or Unsupervised Learning. WCICA 2006: Sixth World Congress on Intelligent Control and Automation, Vols 1-12, Conference Proceedings, 2006: 2886-2890.
  • [8] Gelenbe, E.; Hussain, K.; Abdelbaki, H. Random Neural Network Texture Model. Applications of Artificial Neural Networks in Image Processing V, 2000, 3962: 104-111.
  • [9] Liu, Leo Yu-Feng; Liu, Yufeng; Zhu, Hongtu. Masked Convolutional Neural Network for Supervised Learning Problems. Stat, 2020, 9(1).
  • [10] Serrano, Will. Deep Reinforcement Learning with the Random Neural Network. Engineering Applications of Artificial Intelligence, 2022, 110.