Rethinking Safe Policy Learning for Complex Constraints Satisfaction: A Glimpse in Real-Time Security Constrained Economic Dispatch Integrating Energy Storage Units

Cited by: 3
Authors
Hu, Jianxiong [1 ]
Ye, Yujian [1 ]
Wu, Yizhi [1 ]
Zhao, Peilin [2 ]
Liu, Liu [2 ]
Affiliations
[1] Southeast Univ, Sch Elect Engn, Nanjing 210096, Peoples R China
[2] Tencent AI Lab, Shenzhen 210096, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Safety; Costs; Real-time systems; Uncertainty; Security; Energy storage; Indexes; Data-driven methods; energy storage; real-time security constrained economic dispatch; safety policy learning; OPTIMAL POWER-FLOW; REINFORCEMENT;
DOI
10.1109/TPWRS.2024.3419894
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Classification Codes
0808; 0809;
Abstract
Reinforcement learning (RL) for real-time security constrained economic dispatch (RT-SCED) has attracted significant research interest in recent years. However, ordinary RL approaches struggle to ensure satisfaction of system- and device-wise constraints, resorting to penalizing individual constraint violations. With the increasing penetration of renewable energy sources (RES), large-scale energy storage is being integrated into power systems, driven by its ability to mitigate RES intermittency; this creates the need to satisfy time-coupling constraints in RT-SCED problems. Existing safe RL methods either rectify unsafe actions at each time step with a safety layer, which may yield sub-optimal actions at the boundary of the feasible space and may violate time-coupling constraints, or construct a safety evaluation model, which may violate single-step constraints. To address these limitations, this paper proposes a novel safe deep RL method featuring safety exploration and safety optimization modules that jointly enforce single-step and time-coupling constraints. Furthermore, the policy network adopts a residual network architecture and directly computes the real-valued dispatch of all controllable resources, adapting to their distinct power output ranges. Case studies on the IEEE 39-bus and 118-bus test systems validate the effectiveness of the proposed method in cost efficiency, operational security, and computational and scalability performance, compared with state-of-the-art model-driven and data-driven baseline methods.
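To illustrate the policy architecture described in the abstract, the sketch below shows one plausible way a residual policy network could map the system state to real-valued dispatch set-points squashed into each device's own output range. This is a minimal, hypothetical sketch: the layer sizes, class names, device limits, and the tanh-plus-affine rescaling are assumptions for illustration, not the authors' published implementation or the paper's safety modules.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Fully connected residual block (hypothetical width and depth)."""
    def __init__(self, dim):
        super().__init__()
        self.fc1 = nn.Linear(dim, dim)
        self.fc2 = nn.Linear(dim, dim)
        self.act = nn.ReLU()

    def forward(self, x):
        # Skip connection around two linear layers.
        return self.act(x + self.fc2(self.act(self.fc1(x))))

class DispatchPolicy(nn.Module):
    """Residual policy network mapping the grid state to real-valued
    dispatch set-points, rescaled to each device's power output range."""
    def __init__(self, state_dim, n_devices, p_min, p_max, hidden=256, blocks=3):
        super().__init__()
        self.inp = nn.Linear(state_dim, hidden)
        self.blocks = nn.Sequential(*[ResidualBlock(hidden) for _ in range(blocks)])
        self.out = nn.Linear(hidden, n_devices)
        # Per-device limits, e.g. generator P_min/P_max and storage
        # charge/discharge bounds (assumed values for illustration).
        self.register_buffer("p_min", torch.as_tensor(p_min, dtype=torch.float32))
        self.register_buffer("p_max", torch.as_tensor(p_max, dtype=torch.float32))

    def forward(self, state):
        h = torch.relu(self.inp(state))
        h = self.blocks(h)
        u = torch.tanh(self.out(h))  # bounded in (-1, 1)
        # Affine rescaling to each device's distinct output range.
        return self.p_min + 0.5 * (u + 1.0) * (self.p_max - self.p_min)

# Usage example: 4 generators and 2 storage units with different limits (MW);
# negative lower bounds on the storage units represent charging.
policy = DispatchPolicy(state_dim=60, n_devices=6,
                        p_min=[50, 20, 0, 0, -100, -80],
                        p_max=[300, 150, 80, 60, 100, 80])
dispatch = policy(torch.randn(1, 60))  # real-valued set-points within device limits
```

Scaling the bounded tanh output per device keeps every set-point inside its physical range by construction, which is one simple way to handle resources with very different output ranges in a single policy head.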
Pages: 1091 - 1104
Number of pages: 14