Large Scale Nonlinear Control System Fine-Tuning Through Learning

Cited by: 53
Authors
Kosmatopoulos, Elias B. [1 ]
Kouvelas, Anastasios [1 ]
Affiliations
[1] Tech Univ Crete, Dynam Syst & Simulat Lab, Dept Prod & Management Engn, Khania 73100, Greece
Source
IEEE TRANSACTIONS ON NEURAL NETWORKS | 2009, Vol. 20, No. 6
Keywords
Adaptive fine-tuning; adaptive optimization; incremental-extreme learning machine neural networks (I-ELM-NNs); nonlinear control systems; simultaneous perturbation stochastic approximation (SPSA); switching adaptive control; stochastic approximation; networks
DOI
10.1109/TNN.2009.2014061
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Despite the continuous advances in the fields of intelligent control and computing, the design and deployment of efficient large-scale nonlinear control systems (LNCSs) requires a tedious fine-tuning of the LNCS parameters before and during actual system operation. In the majority of LNCSs the fine-tuning process is performed by experienced personnel based on field observations, via experimentation with different combinations of controller parameters and without a systematic approach. Existing adaptive/neural/fuzzy control methodologies cannot be used to develop a systematic, automated fine-tuning procedure for general LNCSs because of the strict assumptions they impose on the controlled system dynamics; adaptive optimization methodologies, on the other hand, fail to guarantee efficient and safe performance during the fine-tuning process, mainly because they rely on random perturbations. In this paper, we introduce and analyze, both by means of mathematical arguments and simulation experiments, a new learning/adaptive algorithm that provides convergent, efficient, and safe fine-tuning of general LNCSs. The proposed algorithm combines two algorithms previously proposed by Kosmatopoulos et al. (2007 and 2008) with incremental-extreme learning machine neural networks (I-ELM-NNs). Among its advantages, the proposed algorithm significantly outperforms the algorithms of Kosmatopoulos et al. as well as other existing adaptive optimization algorithms. Moreover, contrary to the algorithms of Kosmatopoulos et al., the proposed algorithm operates efficiently when the exogenous system inputs (e.g., disturbances, commands, demand) are unbounded signals.
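For context, the simultaneous perturbation stochastic approximation (SPSA) scheme listed in the keywords is representative of the adaptive optimization methods the abstract criticizes: it estimates a descent direction from only two closed-loop evaluations per iteration by perturbing every controller parameter at once with a random sign vector, and it is exactly this reliance on random perturbations that raises the safety concern for deployed LNCSs. The sketch below is a generic SPSA iteration with Spall-style gain sequences, not the algorithm proposed in the paper; the gain constants and the `closed_loop_cost` callable are illustrative assumptions.

```python
import numpy as np

def spsa_step(theta, loss, k, a=0.1, c=0.1, alpha=0.602, gamma=0.101):
    """One generic SPSA iteration: perturb all parameters simultaneously with a
    random +/-1 direction, estimate the gradient from two evaluations of `loss`,
    and take a descent step."""
    ak = a / (k + 1) ** alpha                                # step-size gain
    ck = c / (k + 1) ** gamma                                # perturbation-size gain
    delta = np.random.choice([-1.0, 1.0], size=theta.shape)  # random sign directions
    g_hat = (loss(theta + ck * delta) - loss(theta - ck * delta)) / (2.0 * ck * delta)
    return theta - ak * g_hat

# Hypothetical usage with a user-supplied closed-loop performance measure:
# theta = np.zeros(3)                       # controller parameters to tune
# for k in range(200):
#     theta = spsa_step(theta, closed_loop_cost, k)
```

Similarly, the incremental-extreme learning machine (I-ELM) networks cited from Huang and Chen ([6], [7] in the reference list) grow a single-hidden-layer network one node at a time: each new node receives random input weights, and only its output weight is fitted, by a one-dimensional least-squares step against the current residual. The following is a minimal sketch of that construction under stated assumptions (tanh activation, scalar output), not the way the networks are embedded in the proposed fine-tuning algorithm.

```python
import numpy as np

def ielm_fit(X, y, max_nodes=50, rng=None):
    """Minimal I-ELM sketch: add hidden nodes one at a time with random input
    weights; compute only the new node's output weight by least squares on the
    current residual."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    residual = y.astype(float).copy()
    nodes = []                               # (input weights, bias, output weight)
    for _ in range(max_nodes):
        w = rng.standard_normal(d)           # random input weights
        b = rng.standard_normal()            # random bias
        h = np.tanh(X @ w + b)               # activation of the new hidden node
        beta = (h @ residual) / (h @ h)      # output weight minimizing residual error
        residual -= beta * h                 # update the residual
        nodes.append((w, b, beta))
    return nodes

def ielm_predict(nodes, X):
    return sum(beta * np.tanh(X @ w + b) for w, b, beta in nodes)
```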
Pages: 1009-1023
Number of pages: 15
References (25 in total)
[1] [Anonymous], 1995, Stable and Robust Adaptive Control.
[2] [Anonymous], 1995, Nonlinear Adaptive C.
[3] Bertsekas, D. P., & Tsitsiklis, J. N. (2000). Gradient convergence in gradient methods with errors. SIAM Journal on Optimization, 10(3), 627-642.
[4] Blum, J. R. (1954). Multidimensional stochastic approximation methods. Annals of Mathematical Statistics, 25(4), 737-744.
[5] Dippon, J., & Renz, J. (1997). Weighted means in stochastic approximation of minima. SIAM Journal on Control and Optimization, 35(5), 1811-1827.
[6] Huang, G.-B., & Chen, L. (2007). Convex incremental extreme learning machine. Neurocomputing, 70(16-18), 3056-3062.
[7] Huang, G.-B., Chen, L., & Siew, C.-K. (2006). Universal approximation using incremental constructive feedforward networks with random hidden nodes. IEEE Transactions on Neural Networks, 17(4), 879-892.
[8] Karush, W. (2014). Minima Functions Sev.
[9] Kiefer, J., & Wolfowitz, J. (1952). Stochastic estimation of the maximum of a regression function. Annals of Mathematical Statistics, 23(3), 462-466.
[10] Kosmatopoulos, E. B., & Ioannou, P. A. (2002). Robust switching adaptive control of multi-input nonlinear systems. IEEE Transactions on Automatic Control, 47(4), 610-624.