Model Reduction in Linear Parameter-Varying Models using Autoencoder Neural Networks

Cited: 0
Authors
Rizvi, Syed Z. [1 ]
Abbasi, Farshid [1 ]
Velni, Javad Mohammadpour [1 ]
Affiliations
[1] Univ Georgia, Coll Engn, Sch Elect & Comp Engn, Athens, GA 30602 USA
Source
2018 ANNUAL AMERICAN CONTROL CONFERENCE (ACC) | 2018
Keywords
H-INFINITY CONTROL; LPV; DESIGN; SET
DOI
Not available
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology]
Subject Classification Code
0812
Abstract
This paper presents a method for model reduction of systems represented by linear parameter-varying (LPV) models, seeking to reduce the dimension of the scheduling variable space by employing autoencoder (AE) neural networks. The reduction of scheduling variables results in an exponential decrease in the computational complexity of gain-scheduled LPV controller synthesis. Autoencoders rely on minimizing a regularized, sparse least-squares cost function that fits the scheduling variables reproduced from the new lower-dimensional variables to the original measurements. In this way, unlike other unsupervised nonlinear reduction methods, AEs do not require separately solving for a pre-image in the original scheduling space. Moreover, unlike principal component analysis (PCA), AEs can employ nonlinear encoding and are thus suitable for LPV models. When needed, multiple encoding and decoding layers can be added, enabling deep learning methods for model reduction. A case study of a mechanical system is considered, and the results are compared with those of linear dimensionality reduction techniques. The reduced model is then used for H-infinity controller synthesis, and the resulting controllers are compared.
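To make the reduction step concrete, the following is a minimal, self-contained sketch (not the authors' implementation) of a single-hidden-layer autoencoder that compresses measured scheduling-variable samples into a lower-dimensional latent scheduling vector by minimizing a regularized least-squares reconstruction cost. The array names, dimensions, learning rate, and synthetic data below are illustrative assumptions only.

```python
# Illustrative autoencoder sketch for scheduling-variable reduction.
# All names (Rho, n_rho, n_z, lam, lr) and the synthetic data are assumptions,
# not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic scheduling-variable measurements: N samples of an n_rho-dimensional
# scheduling vector that actually varies on a 1-D manifold, so n_z = 1 suffices.
N, n_rho, n_z = 500, 3, 1
t = rng.uniform(-1.0, 1.0, size=(N, 1))
Rho = np.hstack([np.sin(np.pi * t), np.cos(np.pi * t), t**2])

# Nonlinear (tanh) encoder and linear decoder weights.
W1 = 0.1 * rng.standard_normal((n_rho, n_z)); b1 = np.zeros(n_z)
W2 = 0.1 * rng.standard_normal((n_z, n_rho)); b2 = np.zeros(n_rho)

lam, lr = 1e-4, 0.05  # L2 regularization weight and gradient-descent step size
for epoch in range(5000):
    Z = np.tanh(Rho @ W1 + b1)      # reduced scheduling variables (latent code)
    Rho_hat = Z @ W2 + b2           # reconstruction in the original scheduling space
    E = Rho_hat - Rho

    # Regularized least-squares cost: reconstruction error plus weight penalty.
    cost = 0.5 * np.mean(np.sum(E**2, axis=1)) \
        + 0.5 * lam * (np.sum(W1**2) + np.sum(W2**2))

    # Backpropagation through the linear decoder and the tanh encoder.
    dW2 = Z.T @ E / N + lam * W2
    db2 = E.mean(axis=0)
    dZ = (E @ W2.T) * (1.0 - Z**2)
    dW1 = Rho.T @ dZ / N + lam * W1
    db1 = dZ.mean(axis=0)

    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final reconstruction cost: {cost:.4e}")
# Z is the reduced scheduling variable; the decoder maps it back to the original
# space, so no separate pre-image problem has to be solved.
```

In this sketch the encoder output plays the role of the new, lower-dimensional scheduling variable, and the decoder provides the map back to the original scheduling space that the abstract refers to.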
Pages: 6415-6420
Number of pages: 6