Distributed neural network control with dependability guarantees: a compositional port-Hamiltonian approach

Times Cited: 0
Authors
Furieri, Luca [1]
Galimberti, Clara Lucia [1]
Zakwan, Muhammad [1]
Ferrari-Trecate, Giancarlo [1]
Affiliations
[1] Ecole Polytech Fed Lausanne, Inst Mech Engn, Lausanne, Switzerland
Source
LEARNING FOR DYNAMICS AND CONTROL CONFERENCE, 2022, Vol. 168
Keywords
optimal distributed control; deep learning; port-Hamiltonian systems; neural ODEs
DOI
Not available
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Large-scale cyber-physical systems require that control policies are distributed, that is, that they only rely on local real-time measurements and communication with neighboring agents. Optimal Distributed Control (ODC) problems are, however, highly intractable even in seemingly simple cases. Recent work has thus proposed training Neural Network (NN) distributed controllers. A main challenge of NN controllers is that they are not dependable during and after training, that is, the closed-loop system may be unstable, and the training may fail due to vanishing gradients. In this paper, we address these issues for networks of nonlinear port-Hamiltonian (pH) systems, whose modeling power ranges from energy systems to non-holonomic vehicles and chemical reactions. Specifically, we embrace the compositional properties of pH systems to characterize deep Hamiltonian control policies with built-in closed-loop stability guarantees, irrespective of the interconnection topology and the chosen NN parameters. Furthermore, our setup enables leveraging recent results on well-behaved neural ODEs to prevent the phenomenon of vanishing gradients by design. Numerical experiments corroborate the dependability of the proposed architecture, while matching the performance of general neural network policies.
Pages: 13
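The abstract's key structural claim, that closed-loop stability holds irrespective of the chosen NN parameters because the learned dynamics retain port-Hamiltonian structure, can be illustrated with a minimal sketch. The code below is not the authors' implementation: it assumes a hypothetical log-cosh parameterization of the Hamiltonian and shows, for a single pH vector field dx/dt = (J - R)∇H(x) with skew-symmetric J and positive semidefinite R, that dH/dt ≤ 0 for arbitrary (here randomly drawn) parameter values. The paper's compositional, distributed controller setup is not reproduced here.

```python
# Minimal sketch (not the authors' implementation): a single port-Hamiltonian
# vector field dx/dt = (J - R) grad_H(x) with skew-symmetric J and positive
# semidefinite R, so that along trajectories
#   dH/dt = grad_H(x)^T (J - R) grad_H(x) = -grad_H(x)^T R grad_H(x) <= 0
# for ANY values of the trainable parameters. The log-cosh Hamiltonian below
# is a hypothetical parameterization chosen only for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 4          # state dimension (illustrative)
m = 8          # hidden width of the Hamiltonian parameterization (illustrative)

# "Trainable" parameters of H(x) = sum(log(cosh(W x + b))); drawn at random here.
W = rng.standard_normal((m, n))
b = rng.standard_normal(m)

# Structure matrices: J = -J^T (interconnection), R = R^T >= 0 (dissipation).
A = rng.standard_normal((n, n))
J = A - A.T
B = rng.standard_normal((n, n))
R = B @ B.T

def hamiltonian(x):
    return np.sum(np.log(np.cosh(W @ x + b)))

def grad_hamiltonian(x):
    # Analytic gradient of the log-cosh Hamiltonian: W^T tanh(W x + b).
    return W.T @ np.tanh(W @ x + b)

def vector_field(x):
    return (J - R) @ grad_hamiltonian(x)

# Check the parameter-independent energy decrease at a random state.
x = rng.standard_normal(n)
g = grad_hamiltonian(x)
dH_dt = g @ vector_field(x)        # equals -g^T R g <= 0
print(f"H(x) = {hamiltonian(x):.4f}, dH/dt = {dH_dt:.4f} (non-positive)")
```

Because g^T J g = 0 for any skew-symmetric J, only the dissipation term determines the sign of dH/dt; this is the structural mechanism behind the parameter-independent stability property described in the abstract, shown here for a single system rather than an interconnected network.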