Discretizing continuous neural networks using a polarization learning rule

Cited by: 0
Authors
Wang, LF [1 ]
Cheng, HD [1 ]
Affiliation
[1] Utah State Univ, Dept Comp Sci, Logan, UT 84322
Keywords
neural networks; error back-propagation; grammatical inference; finite state automata; discretization; second-order recurrent networks;
DOI
10.1016/S0031-3203(96)00082-9
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Discrete neural networks are simpler than their continuous counterparts, obtain more stable solutions, and have hidden-layer representations that are easier to interpret. This paper presents a polarization learning rule for discretizing multi-layer neural networks with continuous activation functions. The rule forces the activation value of a neuron toward the two poles of its activation function. First, we use this rule in the form of a modified error function to discretize the hidden units of a back-propagation network. Then, we apply the same principle to second-order recurrent networks to solve grammatical inference problems. The experimental results are superior to those obtained with existing approaches. Copyright (C) 1997 Pattern Recognition Society.
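The record does not reproduce the paper's exact error-function modification; the following is a minimal sketch of one plausible form, assuming sigmoid units and a penalty term h(1 - h) that vanishes at the activation's two poles (0 and 1). The penalty weight `lam` and the function names are hypothetical, introduced here only for illustration:

```python
import numpy as np

def sigmoid(x):
    """Standard logistic activation with poles at 0 and 1."""
    return 1.0 / (1.0 + np.exp(-x))

def polarization_penalty(h):
    """Sum of h * (1 - h) over hidden activations.

    The term is zero when every activation sits at a pole of the
    sigmoid (0 or 1) and maximal (0.25 per unit) at h = 0.5, so
    minimizing it drives hidden units toward discrete values.
    """
    return np.sum(h * (1.0 - h))

def modified_error(y_true, y_pred, hidden, lam=0.1):
    """Squared error plus the polarization term on the hidden layer.

    `lam` is an assumed hyperparameter trading off task error
    against how strongly activations are pushed to the poles.
    """
    mse = 0.5 * np.sum((y_true - y_pred) ** 2)
    return mse + lam * polarization_penalty(hidden)
```

Because the penalty is differentiable, its gradient can be added to the usual back-propagation update, which matches the abstract's description of discretization via a modified error function.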
Pages: 253-260 (8 pages)