Deep reinforcement learning based voltage regulation in edge computing paradigm for PV rich distribution networks

Cited by: 0
Authors
Li, Chang [1 ]
Li, Yong [1 ,2 ]
Liu, Jiayan [1 ]
Kleemann, Michael [3 ]
Xie, Liwei [2 ,5 ]
Peng, Jing [2 ,6 ]
Xu, Jie [2 ,6 ]
Wang, Can [4 ]
Cao, Yijia [1 ]
Affiliations
[1] Hunan Univ, Coll Elect & Informat Engn, Changsha 410082, Peoples R China
[2] Key Lab Satellite Nav Technol, Changsha 410073, Peoples R China
[3] Katholieke Univ Leuven, Dept ESAT, B-9000 Ghent, Belgium
[4] State Grid Hunan Elect Power Co Ltd, Dept Elect Engn, Changsha 410007, Peoples R China
[5] Changsha Univ Sci & Technol, Coll Elect & Informat Engn, Changsha 410114, Peoples R China
[6] Natl Univ Def Technol, Coll Elect Sci & Technol, Changsha 410073, Peoples R China
Keywords
Volt-VAR control; Deep reinforcement learning; Distribution network; Edge computing; PENETRATION; INVERTER;
DOI
10.1016/j.epsr.2024.111159
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Subject Classification Codes
0808; 0809;
Abstract
As the penetration of renewable energy in distribution networks continues to rise, reverse power flow caused by peak photovoltaic (PV) output increasingly leads to voltage limit violations, which must be mitigated through effective voltage regulation strategies. Previous studies have typically modeled the coordinated operation of PV inverters as optimization or neural-network computation problems, but they have often overlooked the performance of the overall computing system. In this study, we propose a low-power edge computing voltage regulation framework that integrates a gated control policy network, a value evaluation function, and a gated Markov decision process. An update gate and a reset gate control the hidden state of the information flow in the distribution network, ensuring that the state is updated and passed on to the next time step at each interval. Compared with the traditional deep deterministic policy gradient (DDPG) algorithm, this architecture demonstrates superior performance in both training convergence and voltage regulation. When tested on an IEEE distribution feeder with real historical data, the edge computing voltage regulation framework uses less than 20% of the total resources of the Transformer Terminal Unit (TTU), making it feasible to deploy on edge devices with limited computational capacity.
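The update/reset gating the abstract describes matches the standard GRU-cell equations. As an illustration only (the paper's exact network architecture and weights are not given in this record, so all names and dimensions below are assumptions), a minimal NumPy sketch of one gated hidden-state update per time step might look like:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h_prev, p):
    """One gated update of the hidden state (standard GRU equations).

    x      : measurement vector at the current step (e.g. bus voltages, PV output)
    h_prev : hidden state carried over from the previous time step
    p      : dict of weights/biases for the update (z) and reset (r) gates and
             the candidate state -- illustrative names, not from the paper
    """
    z = sigmoid(p["Wz"] @ x + p["Uz"] @ h_prev + p["bz"])          # update gate
    r = sigmoid(p["Wr"] @ x + p["Ur"] @ h_prev + p["br"])          # reset gate
    h_tilde = np.tanh(p["Wh"] @ x + p["Uh"] @ (r * h_prev) + p["bh"])
    # the update gate blends the old state with the candidate state
    return (1.0 - z) * h_prev + z * h_tilde

# toy dimensions: 3 measurements, hidden size 4 (assumed for the sketch)
rng = np.random.default_rng(0)
n_in, n_h = 3, 4
p = {k: rng.standard_normal((n_h, n_in)) * 0.1 for k in ("Wz", "Wr", "Wh")}
p.update({k: rng.standard_normal((n_h, n_h)) * 0.1 for k in ("Uz", "Ur", "Uh")})
p.update({k: np.zeros(n_h) for k in ("bz", "br", "bh")})

h = np.zeros(n_h)
for t in range(5):          # hidden state is updated and passed to the next step
    h = gru_step(rng.standard_normal(n_in), h, p)
print(h.shape)  # (4,)
```

In the paper's framework this gated state would feed the control policy that sets PV inverter reactive-power outputs; here it only demonstrates the gating mechanism itself.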
Pages: 11
Related Papers
50 records in total
  • [21] A Multi-Agent Deep Reinforcement Learning Based Voltage Regulation Using Coordinated PV Inverters
    Cao, Di
    Hu, Weihao
    Zhao, Junbo
    Huang, Qi
    Chen, Zhe
    Blaabjerg, Frede
    IEEE TRANSACTIONS ON POWER SYSTEMS, 2020, 35 (05) : 4120 - 4123
  • [22] Deep Reinforcement Learning for Cooperative Content Caching in Vehicular Edge Computing and Networks
    Qiao, Guanhua
    Leng, Supeng
    Maharjan, Sabita
    Zhang, Yan
    Ansari, Nirwan
    IEEE INTERNET OF THINGS JOURNAL, 2020, 7 (01) : 247 - 257
  • [23] Task offloading in vehicular edge computing networks via deep reinforcement learning
    Karimi, Elham
    Chen, Yuanzhu
    Akbari, Behzad
    COMPUTER COMMUNICATIONS, 2022, 189 : 193 - 204
  • [24] Deep Reinforcement Learning for Offloading and Resource Allocation in Vehicle Edge Computing and Networks
    Liu, Yi
    Yu, Huimin
    Xie, Shengli
    Zhang, Yan
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2019, 68 (11) : 11158 - 11168
  • [25] Task Assignment in Mobile Edge Computing Networks: A Deep Reinforcement Learning Approach
    Feng, Mingjie
    Zhao, Qi
    Sullivan, Nichole
    Chen, Genshe
    Pham, Khanh
    Blasch, Erik
    SENSORS AND SYSTEMS FOR SPACE APPLICATIONS XIV, 2021, 11755
  • [26] Deep Reinforcement Learning based Service Migration Strategy for Edge Computing
    Gao, Zhipeng
    Jiao, Qidong
    Xiao, Kaile
    Wang, Qian
    Mo, Zijia
    Yang, Yang
    2019 13TH IEEE INTERNATIONAL CONFERENCE ON SERVICE-ORIENTED SYSTEM ENGINEERING (SOSE) / 10TH INTERNATIONAL WORKSHOP ON JOINT CLOUD COMPUTING (JCC) / IEEE INTERNATIONAL WORKSHOP ON CLOUD COMPUTING IN ROBOTIC SYSTEMS (CCRS), 2019, : 116 - 121
  • [27] Deep Reinforcement Learning and Optimization Based Green Mobile Edge Computing
    Yang, Yang
    Hu, Yulin
    Gursoy, M. Cenk
    2021 IEEE 18TH ANNUAL CONSUMER COMMUNICATIONS & NETWORKING CONFERENCE (CCNC), 2021,
  • [28] Computation offloading Optimization in Edge Computing based on Deep Reinforcement Learning
    Zhu Qinghua
    Chang Ying
    Zhao Jingya
    Liu Yong
    2020 5TH INTERNATIONAL CONFERENCE ON MECHANICAL, CONTROL AND COMPUTER ENGINEERING (ICMCCE 2020), 2020, : 1552 - 1558
  • [29] Dependent Task Offloading for Edge Computing based on Deep Reinforcement Learning
    Wang, Jin
    Hu, Jia
    Min, Geyong
    Zhan, Wenhan
    Zomaya, Albert Y.
    Georgalas, Nektarios
    IEEE TRANSACTIONS ON COMPUTERS, 2022, 71 (10) : 2449 - 2461
  • [30] Resource Allocation Based on Deep Reinforcement Learning in IoT Edge Computing
    Xiong, Xiong
    Zheng, Kan
    Lei, Lei
    Hou, Lu
    IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, 2020, 38 (06) : 1133 - 1146