Guest Editorial Special Issue on Reinforcement Learning-Based Control: Data-Efficient and Resilient Methods

Cited by: 0
Authors
Gao, Weinan [1]
Li, Na [2]
Vamvoudakis, Kyriakos G. [3]
Yu, Fei Richard [4]
Jiang, Zhong-Ping [5]
Affiliations
[1] Northeastern Univ, State Key Lab Synthet Automat Proc Ind, Shenyang 110819, Liaoning, Peoples R China
[2] Harvard Univ, Sch Engn & Appl Sci, Allston, MA 02134 USA
[3] Georgia Inst Technol, Daniel Guggenheim Sch Aerosp Engn, Atlanta, GA 30332 USA
[4] Carleton Univ, Dept Syst & Comp Engn, Ottawa, ON K1S 5B6, Canada
[5] NYU, Tandon Sch Engn, Brooklyn, NY 11201 USA
Keywords
Special issues and sections; Reinforcement learning; Learning systems; Data integrity; Data models; Computer network management; Resilience;
DOI
10.1109/TNNLS.2024.3362092
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
As an important branch of machine learning, reinforcement learning (RL) has proved effective in many emerging applications in science and engineering. A remarkable advantage of RL is that it enables agents to maximize their cumulative rewards through online exploration of, and interaction with, unknown (or partially unknown) and uncertain environments; in this sense, RL can be regarded as a class of data-driven adaptive optimal control methods. However, because of this data-driven nature, the successful implementation of RL-based control systems usually relies on a large amount of online data. It is therefore imperative to develop data-efficient RL methods that reduce the number of interactions a control system requires with its external environment. Moreover, network-induced issues, such as cyberattacks, packet dropouts, communication latency, and actuator and sensor faults, threaten the safety, security, stability, and reliability of networked control systems. Consequently, it is important to develop safe and resilient RL mechanisms.
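The online-interaction loop the abstract describes can be made concrete with a minimal sketch. The tabular Q-learning example below in Python is not from the editorial: the toy chain environment, reward, and hyperparameters (N_STATES, GAMMA, ALPHA, EPS) are illustrative assumptions, chosen only to show an agent improving its estimate of cumulative reward purely from interactions with nominally unknown dynamics.

import numpy as np

rng = np.random.default_rng(0)

N_STATES, N_ACTIONS = 5, 2          # toy chain MDP (assumed for illustration)
GAMMA, ALPHA, EPS = 0.95, 0.1, 0.1  # discount factor, step size, exploration rate

def step(state, action):
    # Assumed dynamics: action 1 moves right, action 0 moves left;
    # a unit reward is collected at the right end of the chain.
    nxt = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

Q = np.zeros((N_STATES, N_ACTIONS))
for episode in range(500):
    s = 0
    for t in range(20):
        # epsilon-greedy online exploration of the environment
        a = int(rng.integers(N_ACTIONS)) if rng.random() < EPS else int(np.argmax(Q[s]))
        s_next, r = step(s, a)
        # temporal-difference update toward the optimal cumulative reward
        Q[s, a] += ALPHA * (r + GAMMA * np.max(Q[s_next]) - Q[s, a])
        s = s_next

print(np.argmax(Q, axis=1))  # greedy action per state after learning

Running the sketch prints the learned greedy action for each state (action 1, "move right", throughout); the data-efficient methods this special issue calls for aim to reach such a policy with far fewer environment interactions.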
Pages: 3103-3106
Number of pages: 4