Gradient Neural Network with Nonlinear Activation for Computing Inner Inverses and the Drazin Inverse

Cited by: 35
Authors
Stanimirovic, Predrag S. [1 ]
Petkovic, Marko D. [1 ]
Gerontitis, Dimitrios [2 ]
Affiliations
[1] Univ Nis, Fac Sci & Math, Visegradska 33, Nish 18000, Serbia
[2] Aristotle Univ Thessaloniki, Thessaloniki, Greece
Keywords
Recurrent neural network; Moore-Penrose inverse; Drazin inverse; Dynamic equation; Activation function; ZNN MODELS; CONVERGENCE; DYNAMICS;
DOI
10.1007/s11063-017-9705-4
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Two gradient-based recurrent neural networks (GNNs) for solving two matrix equations are presented and investigated. These GNNs can be used to generate various inner inverses, including the Moore-Penrose inverse, and to compute the Drazin inverse. Convergence properties of the defined GNNs are considered, and conditions that ensure convergence toward the pseudoinverse and the Drazin inverse are specified exactly. The influence of nonlinear activation functions on the convergence performance of the defined GNN models is investigated. Computer simulation experiments further confirm the theoretical results.
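The abstract's core idea can be sketched with a generic gradient neural network for the matrix equation AV = I; this is a standard GNN design under stated assumptions, not necessarily the authors' exact models. For a full-row-rank matrix A, the gradient flow dV/dt = -gamma * A^T F(AV - I) drives the state V toward pinv(A) when F is an odd, monotonically increasing elementwise activation (tanh here). The function name `gnn_pinv` and parameters `gamma`, `dt`, `steps` are illustrative choices.

```python
import numpy as np

def gnn_pinv(A, gamma=10.0, dt=1e-3, steps=20000, activation=np.tanh):
    """Euler-integrate the GNN dynamics dV/dt = -gamma * A.T @ F(A V - I).

    For full-row-rank A and an odd, monotone increasing activation F,
    the state V (started at zero, so it stays in range(A.T)) converges
    to the Moore-Penrose inverse pinv(A)."""
    m, n = A.shape
    V = np.zeros((n, m))          # network state, converges to pinv(A)
    I = np.eye(m)
    for _ in range(steps):
        E = A @ V - I             # residual of the matrix equation A V = I
        V -= dt * gamma * (A.T @ activation(E))   # negative-gradient step
    return V

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 2.0, 1.0]])  # 2x3, full row rank
V = gnn_pinv(A)
print(np.max(np.abs(V - np.linalg.pinv(A))))  # deviation from pinv(A)
```

The nonlinear activation reshapes the transient (saturating large residual entries) without moving the equilibrium: since A has full row rank and V stays in range(A.T), the fixed-point condition A.T @ F(A V - I) = 0 forces A V = I, i.e. V = pinv(A).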
Pages: 109-133
Number of pages: 25
Related Papers
30 records in total
  • [21] Gradient-based Neural Network for Online Solution of Lyapunov Matrix Equation with Li Activation Function
    Wang, Shiheng
    Dai, Shidong
    Wang, Ke
    PROCEEDINGS OF THE 4TH INTERNATIONAL CONFERENCE ON INFORMATION TECHNOLOGY AND MANAGEMENT INNOVATION, 2015, 28 : 955 - 959
  • [22] Solving Quadratic Minimization Problem by Finite-Time Recurrent Neural Network Using Two Different Nonlinear Activation Functions
    Zhang, Yongsheng
    Xiao, Lin
    Liao, Bolin
    Ding, Lei
    Lu, Rongbo
    PROCEEDINGS OF 2018 TENTH INTERNATIONAL CONFERENCE ON ADVANCED COMPUTATIONAL INTELLIGENCE (ICACI), 2018, : 151 - 155
  • [23] Activation functions of artificial-neural-network-based nonlinear equalizers for optical nonlinearity compensation
    Miyashita, Yuki
    Kyono, Takeru
    Ikuta, Kai
    Kurokawa, Yuichiro
    Nakamura, Moriya
    IEICE COMMUNICATIONS EXPRESS, 2021, 10 (08): : 558 - 563
  • [24] A robust and fixed-time zeroing neural dynamics for computing time-variant nonlinear equation using a novel nonlinear activation function
    Yu, Fei
    Liu, Li
    Xiao, Lin
    Li, Kenli
    Cai, Shuo
    NEUROCOMPUTING, 2019, 350 : 108 - 116
  • [25] Gradient Descent-Barzilai Borwein-Based Neural Network Tracking Control for Nonlinear Systems With Unknown Dynamics
    Wang, Yujia
    Wang, Tong
    Yang, Xuebo
    Yang, Jiae
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, 34 (01) : 305 - 315
  • [26] Convergence of a Relaxed Variable Splitting Coarse Gradient Descent Method for Learning Sparse Weight Binarized Activation Neural Network
    Dinh, Thu
    Xin, Jack
    FRONTIERS IN APPLIED MATHEMATICS AND STATISTICS, 2020, 6
  • [27] FPGA implementation and image encryption application of a new PRNG based on a memristive Hopfield neural network with a special activation gradient
    Yu, Fei
    Zhang, Zinan
    Shen, Hui
    Huang, Yuanyuan
    Cai, Shuo
    Du, Sichun
    CHINESE PHYSICS B, 2022, 31 (02)
  • [28] A nonlinear zeroing neural network and its applications on time-varying linear matrix equations solving, electronic circuit currents computing and robotic manipulator trajectory tracking
    Jin, Jie
    Chen, Weijie
    Zhao, Lv
    Chen, Long
    Tang, Zhijun
    COMPUTATIONAL & APPLIED MATHEMATICS, 2022, 41 (07)
  • [29] Adaptive Coefficient Designs for Nonlinear Activation Function and Its Application to Zeroing Neural Network for Solving Time-Varying Sylvester Equation
    Jian, Zhen
    Xiao, Lin
    Li, Kenli
    Zuo, Qiuyue
    Zhang, Yongsheng
    JOURNAL OF THE FRANKLIN INSTITUTE-ENGINEERING AND APPLIED MATHEMATICS, 2020, 357 (14): : 9909 - 9929
  • [30] Discrete gradient-zeroing neural network algorithm for solving future Sylvester equation aided with left-right four-step rule as well as robot arm inverse kinematics
    Guo, Pengfei
    Zhang, Yunong
    Yao, Zheng-an
    MATHEMATICS AND COMPUTERS IN SIMULATION, 2025, 233 : 475 - 501