Training Recurrent Neural Networks With the Levenberg-Marquardt Algorithm for Optimal Control of a Grid-Connected Converter

Cited by: 91
Authors
Fu, Xingang [1]
Li, Shuhui [1]
Fairbank, Michael [2]
Wunsch, Donald C. [3]
Alonso, Eduardo [2]
Affiliations
[1] Univ Alabama, Dept Elect & Comp Engn, Tuscaloosa, AL 35487 USA
[2] City Univ London, Sch Math Comp Sci & Engn, London EC1V 0HB, England
[3] Missouri Univ Sci & Technol, Dept Elect & Comp Engn, Rolla, MO 65409 USA
Funding
U.S. National Science Foundation (NSF)
Keywords
Backpropagation through time (BPTT); d-q vector control; dynamic programming (DP); forward accumulation through time (FATT); grid-connected converter (GCC); Jacobian matrix; Levenberg-Marquardt (LM); optimal control; recurrent neural network (RNN); CURRENT VECTOR CONTROL; BACKPROPAGATION
DOI
10.1109/TNNLS.2014.2361267
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
This paper investigates how to train a recurrent neural network (RNN) with the Levenberg-Marquardt (LM) algorithm and how to use an RNN to implement optimal control of a grid-connected converter (GCC). To train an RNN successfully and efficiently with the LM algorithm, a new forward accumulation through time (FATT) algorithm is proposed to compute the Jacobian matrix that the LM algorithm requires, and the paper shows how FATT is incorporated into the LM update. The results show that the combined LM-FATT algorithm trains RNNs more effectively than the conventional backpropagation through time algorithm. The paper also presents an analytical study of the optimal control of GCCs, including theoretically ideal optimal and suboptimal controllers. Because the ideal optimal GCC controller is inapplicable under practical conditions, a new RNN controller with an improved input structure is proposed to approximate it. The ideal optimal controller and a well-trained RNN controller were compared in close-to-real-life power converter switching environments, demonstrating that the proposed RNN controller achieves near-ideal optimal control performance even at low sampling rates. The RNN controller's strong performance under challenging and distorted system conditions further indicates the feasibility of using an RNN to approximate optimal control in practical applications.
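To make the training idea concrete, the following is a minimal sketch (not the authors' implementation) of how a Jacobian accumulated forward in time can feed a Levenberg-Marquardt weight update. It assumes a generic single-layer tanh RNN tracking a reference trajectory with a sum-of-squared-errors cost; the function names (rnn_step, forward_accumulate, lm_update), the network structure, and the damping value are illustrative assumptions, not the GCC-specific controller described in the paper.

```python
import numpy as np

def rnn_step(w, x, u):
    """One step of a single-layer RNN: x_next = tanh(W @ [x; u; 1])."""
    n = x.size
    W = w.reshape(n, n + u.size + 1)
    z = np.concatenate([x, u, [1.0]])
    return np.tanh(W @ z), z

def forward_accumulate(w, x0, inputs, targets):
    """Run the RNN forward, accumulating the tracking errors e_t and the
    Jacobian de_t/dw alongside the state (forward accumulation through time)."""
    n = x0.size
    x = x0.copy()
    dx_dw = np.zeros((n, w.size))          # sensitivity of the state w.r.t. the weights
    errors, jac_blocks = [], []
    for u, tgt in zip(inputs, targets):
        x, z = rnn_step(w, x, u)
        W = w.reshape(n, z.size)
        g = 1.0 - x**2                     # derivative of tanh at the new state
        direct = np.kron(np.eye(n), z)     # d(W z)/d(vec W), weights flattened row-wise
        recurrent = W[:, :n] @ dx_dw       # propagation through the previous state
        dx_dw = g[:, None] * (direct + recurrent)
        errors.append(x - tgt)             # e_t = x_t - reference_t
        jac_blocks.append(dx_dw.copy())    # de_t/dw = dx_t/dw (reference is constant)
    return np.concatenate(errors), np.vstack(jac_blocks)

def lm_update(w, e, J, mu):
    """One Levenberg-Marquardt step: w <- w - (J^T J + mu I)^{-1} J^T e."""
    A = J.T @ J + mu * np.eye(w.size)
    return w - np.linalg.solve(A, J.T @ e)

# Illustrative usage with random data (shapes only; not a converter model):
rng = np.random.default_rng(0)
n_state, n_input, T = 2, 1, 20
w = 0.1 * rng.standard_normal(n_state * (n_state + n_input + 1))
inputs = rng.standard_normal((T, n_input))
targets = np.zeros((T, n_state))
e, J = forward_accumulate(w, np.zeros(n_state), inputs, targets)
w = lm_update(w, e, J, mu=0.01)
```

Carrying the sensitivity matrix dx/dw forward alongside the state means the full Jacobian needed by the LM step is available after a single forward pass through the trajectory; in a complete trainer the damping factor mu would typically be increased when a step fails to reduce the cost and decreased when it succeeds.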
Pages: 1900-1912 (13 pages)