Reduced-order identification methods: Hierarchical algorithm or variable elimination algorithm

Times Cited: 0
Authors
Chen, Jing [1 ]
Mao, Yawen [1 ]
Wang, Dongqing [2 ]
Gan, Min [3 ]
Zhu, Quanmin [4 ]
Liu, Feng [5 ]
Affiliations
[1] Jiangnan Univ, Sch Sci, Wuxi 214122, Peoples R China
[2] Qingdao Univ, Coll Elect Engn, Qingdao 266071, Peoples R China
[3] Qingdao Univ, Coll Comp Sci & Technol, Qingdao 266071, Peoples R China
[4] Univ West England, Dept Engn Design & Math, Bristol BS16 1QY, England
[5] Stevens Inst Technol, Sch Syst & Enterprises, Hoboken, NJ 07030 USA
Funding
National Natural Science Foundation of China;
Keywords
Reduced-order algorithm; Least square algorithm; Gradient descent algorithm; Condition number; Hierarchical identification algorithm; Variable elimination algorithm; SYSTEM-IDENTIFICATION;
DOI
10.1016/j.automatica.2024.111991
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812 ;
Abstract
Reduced-order identification algorithms are widely used in machine learning and big data applications, where large-scale systems are common. For large-scale system identification, the traditional least squares algorithm requires the inversion of a high-order matrix, while the traditional gradient descent algorithm converges slowly. The reduced-order algorithm proposed in this paper has several advantages over previous work: (1) by sequentially partitioning the parameter vector, it reduces the inversion of a high-order matrix to a sequence of low-order matrix inversions; (2) it has a better-conditioned information matrix than the gradient descent algorithm, and therefore converges faster; (3) its convergence rate can be increased further by the Aitken acceleration method, so the reduced-order based Aitken algorithm is at least quadratically convergent and imposes no restriction on the step size. The properties of the reduced-order algorithm are also given. Simulation results demonstrate the effectiveness of the proposed algorithm. (c) 2024 Elsevier Ltd. All rights are reserved, including those for text and data mining, AI training, and similar technologies.
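The core idea in point (1) of the abstract can be illustrated with a minimal numerical sketch. This is not the authors' exact algorithm, only a generic block-partitioned least-squares iteration under assumed names and sizes: the parameter vector is split into small blocks, and each update inverts only a low-order (block-sized) matrix instead of the full high-order information matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated least-squares identification problem: y = Phi @ theta + noise.
# Sizes are illustrative; in a large-scale system p would be much bigger.
n, p, block = 200, 12, 3
Phi = rng.standard_normal((n, p))
theta_true = rng.standard_normal(p)
y = Phi @ theta_true + 0.01 * rng.standard_normal(n)

# Sequential partition of the parameter vector into blocks of size `block`.
blocks = [np.arange(i, i + block) for i in range(0, p, block)]
theta = np.zeros(p)

# Reduced-order sweep: update one block at a time with the others fixed,
# so each step solves only a (block x block) system rather than (p x p).
for sweep in range(50):
    for idx in blocks:
        rest = np.setdiff1d(np.arange(p), idx)
        r = y - Phi[:, rest] @ theta[rest]          # residual w.r.t. fixed blocks
        A = Phi[:, idx]
        theta[idx] = np.linalg.solve(A.T @ A, A.T @ r)  # low-order inversion

# Compare against the full least-squares solution (high-order inversion).
theta_ls = np.linalg.lstsq(Phi, y, rcond=None)[0]
print(np.max(np.abs(theta - theta_ls)))
```

After a modest number of sweeps the block-wise estimate agrees closely with the full least-squares solution, while never forming or inverting the full p-by-p normal matrix; the Aitken acceleration mentioned in point (3) would be layered on top of such an iteration.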
Pages: 14
References
27 records
[1] Ba J, 2014, ACS SYM SER
[2] Chakrabarti K, Gupta N, Chopra N. Iterative pre-conditioning for expediting the distributed gradient-descent method: The case of linear least-squares problem. Automatica, 2022, 137.
[3] Chen F, Young PC. A simple robust method of fractional time-delay estimation for linear dynamic systems. Automatica, 2022, 137.
[4] Chen GY, Gan M, Wang S, Chen CLP. Insights into algorithms for separable nonlinear least squares problems. IEEE Transactions on Image Processing, 2021, 30: 1207-1218.
[5] Chen J, Ma J, Gan M, Zhu Q. Multidirection gradient iterative algorithm: A unified framework for gradient iterative and least squares algorithms. IEEE Transactions on Automatic Control, 2022, 67(12): 6770-6777.
[6] Chen J, Gan M, Zhu Q, Narayan P, Liu Y. Robust standard gradient descent algorithm for ARX models using Aitken acceleration technique. IEEE Transactions on Cybernetics, 2022, 52(9): 9646-9655.
[7] Chen J, Ding F, Zhu Q, Liu Y. Interval error correction auxiliary model based gradient iterative algorithms for multirate ARX models. IEEE Transactions on Automatic Control, 2020, 65(10): 4385-4392.
[8] Chen L, Chen JB, Chen GY, Gan M, Chen CLP. Nuisance parameter estimation algorithms for separable nonlinear models. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2022, 52(11): 7236-7247.
[9] Chen T, Andersen MS, Ljung L, Chiuso A, Pillonetto G. System identification via sparse multiple kernel-based regularization using sequential convex optimization techniques. IEEE Transactions on Automatic Control, 2014, 59(11): 2933-2945.
[10] Cheney EW, 2007, NUMERICAL MATH COMPU