A one-layer recurrent neural network with a unipolar hard-limiting activation function for k-winners-take-all operation

Cited by: 1
Authors
Liu, Qingshan [1 ]
Wang, Jun [1 ]
Affiliations
[1] Chinese Univ Hong Kong, Dept Mech & Automat Engn, Sha Tin, Hong Kong, Peoples R China
Source
2007 IEEE INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, VOLS 1-6, 2007
DOI
10.1109/IJCNN.2007.4370935
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This paper presents a one-layer recurrent neural network with a unipolar hard-limiting activation function for k-winners-take-all (kWTA) operation. The kWTA operation is first converted into an equivalent quadratic programming problem. Then a one-layer recurrent neural network is constructed. The neural network is guaranteed to be capable of performing the kWTA operation in real time. The stability and convergence of the neural network are proven by using Lyapunov and nonsmooth analysis methods.
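To make the kWTA operation concrete, here is a minimal numerical sketch. It is not the network proposed in the paper (whose exact dynamics are not given in this abstract); it uses the same unipolar hard-limiting (Heaviside) activation but with a single illustrative threshold variable that rises while more than k inputs exceed it and falls while fewer do, so at equilibrium the hard-limited outputs flag exactly the k largest inputs. The function name kwta, the step sizes, and the single-threshold form are assumptions for illustration only.

```python
import numpy as np

def heaviside(s):
    """Unipolar hard-limiting (Heaviside step) activation: 1 if s >= 0, else 0."""
    return (s >= 0).astype(float)

def kwta(u, k, eps=1e-2, dt=1e-4, steps=10000):
    """Mark the k largest entries of u via a hard-limiting threshold dynamic.

    Illustrative single-threshold sketch (Euler integration), NOT the paper's
    n-neuron network:
        eps * dz/dt = sum_i heaviside(u_i - z) - k
    At equilibrium exactly k inputs exceed the threshold z, so the outputs
    y_i = heaviside(u_i - z) mark the k winners. Assumes distinct inputs and
    a step size that is small relative to the gaps between them.
    """
    z = float(np.min(u))                        # start below every input
    for _ in range(steps):
        excess = heaviside(u - z).sum() - k     # too many or too few winners?
        if excess == 0:                         # equilibrium reached
            break
        z += (dt / eps) * excess                # raise or lower the threshold
    return heaviside(u - z)

if __name__ == "__main__":
    u = np.array([0.3, 1.2, -0.5, 0.9, 0.1])
    print(kwta(u, k=2))   # expected: [0. 1. 0. 1. 0.] -- winners are 1.2 and 0.9
```

The quadratic programming reformulation mentioned in the abstract is not reproduced here; a commonly used equivalent program in the kWTA literature is to maximize u^T x subject to sum_i x_i = k and 0 <= x_i <= 1, whose solution coincides with the hard-limited winner indicator when the inputs are distinct.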
Pages: 84-89
Number of pages: 6
Related Papers (10 in total)
  • [1] A one-layer recurrent neural network with a discontinuous hard-limiting activation function for quadratic programming
    Liu, Qingshan
    Wang, Jun
    IEEE TRANSACTIONS ON NEURAL NETWORKS, 2008, 19(4): 558-570
  • [2] A One-Layer Dual Neural Network with a Unipolar Hard-Limiting Activation Function for Shortest-Path Routing
    Liu, Qingshan
    Wang, Jun
    ARTIFICIAL NEURAL NETWORKS-ICANN 2010, PT II, 2010, 6353: 498+
  • [3] A Recurrent Neural Network with a Tunable Activation Function for Solving K-Winners-Take-All
    Miao Peng
    Shen Yanjun
    Hou Jianshu
    Shen Yi
    2014 33RD CHINESE CONTROL CONFERENCE (CCC), 2014: 4957-4962
  • [4] A Discrete-Time Recurrent Neural Network with One Neuron for k-Winners-Take-All Operation
    Liu, Qingshan
    Cao, Jinde
    Liang, Jinling
    ADVANCES IN NEURAL NETWORKS - ISNN 2009, PT 1, PROCEEDINGS, 2009, 5551: 272+
  • [5] Analysis and Design of a k-Winners-Take-All Model With a Single State Variable and the Heaviside Step Activation Function
    Wang, Jun
    IEEE TRANSACTIONS ON NEURAL NETWORKS, 2010, 21(9): 1496-1506
  • [6] A New Recurrent Neural Network for Solving Convex Quadratic Programming Problems With an Application to the k-Winners-Take-All Problem
    Hu, Xiaolin
    Zhang, Bo
    IEEE TRANSACTIONS ON NEURAL NETWORKS, 2009, 20(4): 654-664
  • [7] A one-layer recurrent neural network with a discontinuous activation function for linear programming
    Liu, Qingshan
    Wang, Jun
    NEURAL COMPUTATION, 2008, 20(5): 1366-1383
  • [8] A Novel Recurrent Neural Network With One Neuron and Finite-Time Convergence for k-Winners-Take-All Operation
    Liu, Qingshan
    Dang, Chuangyin
    Cao, Jinde
    IEEE TRANSACTIONS ON NEURAL NETWORKS, 2010, 21(7): 1140-1148
  • [9] Finite-Time Convergent Recurrent Neural Network with a Hard-Limiting Activation Function for Constrained Optimization with Piecewise-Linear Objective Functions
    Liu, Qingshan
    Wang, Jun
    IEEE TRANSACTIONS ON NEURAL NETWORKS, 2011, 22(4): 601-613
  • [10] A One-Layer Dual Recurrent Neural Network with a Heaviside Step Activation Function for Linear Programming with Its Linear Assignment Application
    Liu, Qingshan
    Wang, Jun
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2011, PT II, 2011, 6792: 253+