Deep reinforcement learning for tuning active vibration control on a smart piezoelectric beam

Times cited: 0
Authors
Febvre, Maryne [1 ,2 ]
Rodriguez, Jonathan [1 ]
Chesne, Simon [1 ]
Collet, Manuel [2 ]
Affiliations
[1] INSA Lyon, CNRS, UMR5259, LaMCoS, Villeurbanne, France
[2] Ecole Cent Lyon, CNRS, UMR5513, ENTPE, LTDS, Ecully, France
Keywords
Active control; vibration control; feedback control; machine learning; neural network; cantilever beam; metamaterial; parameter estimation; smart structures; piezoelectric transducer; SYSTEM;
DOI
10.1177/1045389X241260976
Chinese Library Classification (CLC)
T [Industrial Technology];
Discipline code
08;
Abstract
Piezoelectric transducers are used within smart structures to create functions such as energy harvesting, wave propagation or vibration control to prevent human discomfort, material fatigue, and instability. The design of the structure becomes more complex with shape optimization and the integration of multiple transducers. Most active vibration control strategies require the tuning of multiple parameters. In addition, the optimization of control methods has to consider experimental uncertainties and the global effect of local actuation. This paper presents the use of a Deep Reinforcement Learning (DRL) algorithm to tune a pseudo lead-lag controller on an experimental smart cantilever beam. The algorithm is trained to maximize a reward function that represents the objective of vibration mitigation. An experimental model is estimated from measurements to accelerate the DRL agent's interaction with the environment. The paper compares DRL tuning strategies with H2 and H-infinity norm minimization approaches. It demonstrates the efficiency of DRL tuning by comparing the control performance of the different tuning methods on the model and experimental setup.
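The sketch below is a minimal illustration of the idea described in the abstract, not the paper's implementation: a Gaussian policy over two controller parameters is updated with a simple REINFORCE-style policy-gradient step so as to maximize a reward that penalizes the peak of the closed-loop frequency response. The single-mode plant, the pseudo lead-lag controller form, and every numerical value are assumptions chosen for demonstration; the paper's deep actor-critic agent, reward definition, and identified experimental model are not reproduced here.

# Illustrative sketch only (assumed plant, controller form, and values).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single-mode beam plant: lightly damped oscillator.
wn, zeta = 2 * np.pi * 20.0, 0.005        # natural frequency [rad/s], damping ratio
freqs = np.linspace(1.0, 300.0, 2000)      # evaluation grid [Hz]
s = 1j * 2 * np.pi * freqs

def closed_loop_peak(gain, alpha):
    """Peak closed-loop FRF magnitude for an assumed pseudo lead-lag
    controller C(s) = gain * (s + alpha*wn) / (s + wn/alpha)."""
    G = 1.0 / (s**2 + 2 * zeta * wn * s + wn**2)           # plant FRF
    C = gain * (s + alpha * wn) / (s + wn / max(alpha, 1e-3))
    H = G / (1.0 + G * C)                                   # closed loop
    return np.max(np.abs(H))

def reward(params):
    """Reward = negative peak response, i.e. a vibration-mitigation objective."""
    gain, alpha = params
    return -closed_loop_peak(abs(gain), abs(alpha))

# Gaussian policy over the two parameters; REINFORCE-style update of the mean.
mu = np.array([1e3, 1.0])       # initial parameter mean (assumed)
sigma = np.array([5e2, 0.5])    # fixed exploration noise
lr = 0.1

for episode in range(200):
    samples = mu + sigma * rng.standard_normal((16, 2))
    rewards = np.array([reward(p) for p in samples])
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    # Move the mean toward better-than-average parameter samples.
    mu = mu + lr * (adv[:, None] * (samples - mu)).mean(axis=0)

print("tuned parameters:", mu, "peak response:", closed_loop_peak(*np.abs(mu)))

In the paper's setting, a deep actor-critic agent interacting with the model estimated from measurements would take the place of this toy plant and Gaussian parameter search.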
Pages: 1149-1165
Number of pages: 17