A Learning Approach for Joint Design of Event-Triggered Control and Power-Efficient Resource Allocation

Cited by: 4
Authors
Termehchi, Atefeh [1 ]
Rasti, Mehdi [1 ]
Affiliations
[1] Amirkabir Univ Technol, Dept Comp Engn, Tehran 1591634311, Iran
Funding
Academy of Finland;
Keywords
Ultra reliable low latency communication; Actuators; Resource management; Downlink; Delays; 5G mobile communication; Quality of service; Industrial cyber-physical system; hierarchical reinforcement learning; event-triggered control; power efficient network; radio resource allocation; STATE ESTIMATION; CO-DESIGN; SYSTEMS; COMMUNICATION; MAXIMIZATION; NETWORKING; ALGORITHM;
DOI
10.1109/TVT.2022.3159739
CLC Classification Number
TM [Electrical engineering]; TN [Electronics and communication technology];
Discipline Classification Code
0808; 0809;
Abstract
In emerging Industrial Cyber-Physical Systems (ICPSs), the joint design of the communication and control sub-systems is essential, as the two sub-systems are interconnected. In this paper, we study the joint design of event-triggered control and energy-efficient resource allocation in a fifth-generation (5G) wireless network. We formally state the problem as a multi-objective optimization problem, aiming to minimize both the number of updates to the actuators' input and the power consumption of the downlink transmission. To address the problem, we propose a model-free hierarchical reinforcement learning approach with a uniformly ultimate boundedness stability guarantee that learns four policies simultaneously: an update-time policy for the actuators' input, a control policy, and energy-efficient sub-carrier and power allocation policies. Our simulation results show that the proposed approach can properly control a simulated ICPS and significantly reduce both the number of updates to the actuators' input and the downlink power consumption.
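To make the abstract's multi-objective setting concrete, the following is a minimal, hypothetical sketch (not the paper's actual formulation) of a scalarized reward that jointly penalizes actuator-input updates and downlink transmit power, paired with a simple threshold-based event trigger. The weights `alpha`/`beta`, the drift dynamics, and the power figure are illustrative assumptions.

```python
# Hypothetical sketch: scalarized multi-objective reward penalizing both
# actuator-input updates and downlink transmit power, with a simple
# threshold-based event-triggered update rule. All constants are
# illustrative assumptions, not values from the paper.

def step_reward(update_triggered: bool, tx_power_w: float,
                alpha: float = 1.0, beta: float = 0.5) -> float:
    """Negative weighted cost: fewer updates and less power => higher reward."""
    return -(alpha * float(update_triggered) + beta * tx_power_w)

def event_trigger(state_error: float, threshold: float = 0.1) -> bool:
    """Update the actuator only when the plant-state error exceeds a threshold."""
    return abs(state_error) > threshold

# Toy episode: the plant-state error drifts each step and is reset to zero
# whenever an update is transmitted to the actuator.
error, total_reward, updates = 0.0, 0.0, 0
for t in range(50):
    error += 0.03                       # assumed per-step drift
    triggered = event_trigger(error)
    if triggered:
        updates += 1
        error = 0.0                     # a fresh control input resets the error
    total_reward += step_reward(triggered,
                                tx_power_w=0.2 if triggered else 0.0)
```

In a learning-based design such as the one the abstract describes, a reward of this shape would be what the update-time and resource-allocation policies are trained against, rather than a fixed threshold rule.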
Pages: 6322-6334
Page count: 13