Using Q-learning and genetic algorithms to improve the efficiency of weight adjustments for optimal control and design problems

Cited by: 7
Authors
Kamali, Kaivan [1 ]
Jiang, L. J. [2 ]
Yen, John [1 ]
Wang, K. W. [2 ]
Affiliations
[1] Penn State Univ, Coll Informat Sci & Technol, Lab Intelligent Agents, University Pk, PA 16802 USA
[2] Penn State Univ, Dept Mech & Nucl Engn, Struct Dynam & Control Lab, University Pk, PA 16802 USA
Keywords
optimal control; weight selection; Q-learning; genetic algorithms;
DOI
10.1115/1.2739502
Chinese Library Classification
TP39 [Computer Applications];
Discipline Codes
081203; 0835
Abstract
In traditional optimal control and design problems, the control gains and design parameters are usually derived to minimize a cost function reflecting the system performance and control effort. One major challenge of such approaches is the selection of the weighting matrices in the cost function, which is usually done via trial and error and human intuition. While various techniques have been proposed to automate the weight selection process, they either cannot address complex design problems or suffer from slow convergence rates and high computational costs. We propose a layered approach that places Q-learning, a reinforcement learning technique, on top of genetic algorithms (GAs) to determine the best weightings for optimal control and design problems. The layered approach allows knowledge to be reused: knowledge obtained via Q-learning on one design problem can be used to speed up convergence on a similar design problem. Moreover, the layered approach can solve optimization problems that a GA alone cannot. To test the proposed method, we perform numerical experiments on a sample active-passive hybrid vibration control problem, namely adaptive structures with active-passive hybrid piezoelectric networks. These experiments show that the proposed Q-learning scheme is a promising approach for automating weight selection in complex design problems.
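To make the layered idea concrete, the following Python sketch shows one plausible reading of the abstract: an outer Q-learning loop searches a discretized grid of candidate cost-function weightings, while an inner optimizer tunes the design for whichever weighting is being evaluated. This is an illustrative assumption, not the authors' implementation; all names are hypothetical, the quadratic toy cost stands in for the real performance/effort cost, and a simple random search stands in for the GA layer.

```python
import random

# Hypothetical sketch of the layered approach described in the abstract:
# an outer Q-learning agent selects cost-function weightings; an inner
# optimizer (a random-search stand-in for the GA layer) evaluates each one.

CANDIDATE_WEIGHTS = [0.1, 0.5, 1.0, 5.0, 10.0]  # discretized weighting values
ACTIONS = [-1, 0, 1]                            # move down/stay/up on the grid

def inner_design_optimization(weight, generations=30):
    """Stand-in for the GA layer: random search over a design parameter x,
    minimizing a toy cost J(x) = weight * x^2 + (x - 1)^2."""
    best = float("inf")
    for _ in range(generations):
        x = random.uniform(-2.0, 2.0)
        best = min(best, weight * x**2 + (x - 1.0)**2)
    return best

def q_learning_weight_search(episodes=200, alpha=0.1, gamma=0.9, eps=0.2):
    """Outer layer: tabular Q-learning over the weighting grid."""
    q = {(s, a): 0.0 for s in range(len(CANDIDATE_WEIGHTS)) for a in ACTIONS}
    state = random.randrange(len(CANDIDATE_WEIGHTS))
    for _ in range(episodes):
        # epsilon-greedy action selection over grid moves
        if random.random() < eps:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt = min(max(state + action, 0), len(CANDIDATE_WEIGHTS) - 1)
        # reward is the negative achieved cost, so cheaper designs reinforce the move
        reward = -inner_design_optimization(CANDIDATE_WEIGHTS[nxt])
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = nxt
    # greedy readout: the weighting whose state carries the highest Q-value
    return max(range(len(CANDIDATE_WEIGHTS)),
               key=lambda s: max(q[(s, a)] for a in ACTIONS))

if __name__ == "__main__":
    idx = q_learning_weight_search()
    print("selected weighting:", CANDIDATE_WEIGHTS[idx])
```

The learned Q-table is what the abstract's knowledge-reuse claim would correspond to: warm-starting `q` from a previously solved, similar design problem should reduce the number of costly inner-optimizer evaluations.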
Pages: 302-308
Number of pages: 7
Related Papers
50 records in total
[21]   Optimal operational control for industrial processes based on Q-learning method [J].
Li, Jinna ;
Gao, Xize ;
Yuan, Decheng ;
Fan, Jialu .
PROCEEDINGS OF THE 36TH CHINESE CONTROL CONFERENCE (CCC 2017), 2017, :2562-2567
[22]   Adaptive traffic signal control using deep Q-learning: case study on optimal implementations [J].
Pan, Guangyuan ;
Muresan, Matthew ;
Fu, Liping .
CANADIAN JOURNAL OF CIVIL ENGINEERING, 2023, 50 (06) :488-497
[23]   Unified reinforcement Q-learning for mean field game and control problems [J].
Angiuli, Andrea ;
Fouque, Jean-Pierre ;
Lauriere, Mathieu .
MATHEMATICS OF CONTROL SIGNALS AND SYSTEMS, 2022, 34 (02) :217-271
[24]   Safe Q-Learning for Data-Driven Nonlinear Optimal Control with Asymmetric State Constraints [J].
Zhao, Mingming ;
Wang, Ding ;
Song, Shijie ;
Qiao, Junfei .
IEEE-CAA JOURNAL OF AUTOMATICA SINICA, 2024, 11 (12) :2408-2422
[25]   Optimal scheduling in cloud healthcare system using Q-learning algorithm [J].
Li, Yafei ;
Wang, Hongfeng ;
Wang, Na ;
Zhang, Tianhong .
COMPLEX & INTELLIGENT SYSTEMS, 2022, 8 (06) :4603-4618
[26]   Optimal Electric Vehicle Battery Management Using Q-learning for Sustainability [J].
Suanpang, Pannee ;
Jamjuntr, Pitchaya .
SUSTAINABILITY, 2024, 16 (16)
[27]   Genetic algorithms for optimal design and control of adaptive structures [J].
Ribeiro, R ;
Silva, SD ;
Rodrigues, JD ;
Vaz, M .
SMART STRUCTURES AND MATERIALS 2000: MATHEMATICS AND CONTROL IN SMART STRUCTURES, 2000, 3984 :268-278
[28]   Optimal Tracking Current Control of Switched Reluctance Motor Drives Using Reinforcement Q-Learning Scheduling [J].
Alharkan, Hamad ;
Saadatmand, Sepehr ;
Ferdowsi, Mehdi ;
Shamsi, Pourya .
IEEE ACCESS, 2021, 9 :9926-9936