Deep reinforcement learning based voltage regulation in edge computing paradigm for PV rich distribution networks

Cited: 0
Authors
Li, Chang [1 ]
Li, Yong [1 ,2 ]
Liu, Jiayan [1 ]
Kleemann, Michael [3 ]
Xie, Liwei [2 ,5 ]
Peng, Jing [2 ,6 ]
Xu, Jie [2 ,6 ]
Wang, Can [4 ]
Cao, Yijia [1 ]
Affiliations
[1] Hunan Univ, Coll Elect & Informat Engn, Changsha 410082, Peoples R China
[2] Key Lab Satellite Nav Technol, Changsha 410073, Peoples R China
[3] Katholieke Univ Leuven, Dept ESAT, B-9000 Ghent, Belgium
[4] State Grid Hunan Elect Power Co Ltd, Dept Elect Engn, Changsha 410007, Peoples R China
[5] Changsha Univ Sci & Technol, Coll Elect & Informat Engn, Changsha 410114, Peoples R China
[6] Natl Univ Def Technol, Coll Elect Sci & Technol, Changsha 410073, Peoples R China
Keywords
Volt-VAR control; Deep reinforcement learning; Distribution network; Edge computing; PENETRATION; INVERTER;
DOI
10.1016/j.epsr.2024.111159
CLC Classification
TM [Electrical Engineering]; TN [Electronics & Communication Technology];
Discipline Codes
0808 ; 0809 ;
Abstract
As the penetration of renewable energy in distribution networks continues to rise, reverse power flow caused by peak photovoltaic (PV) output increasingly leads to voltage limit violations, which must be mitigated through effective voltage regulation strategies. Previous studies have typically modeled the coordinated operation of PV inverters as optimization or neural-network computation problems, but they have often overlooked the performance of the overall computing system. In this study, we propose a low-power edge computing voltage regulation framework that integrates a gated control policy network, a value evaluation function, and a gated Markov decision process. An update gate and a reset gate control the hidden state of the information flow in the distribution network, ensuring that the state is updated and passed on to the next time step at each interval. Compared with traditional DDPG, this architecture demonstrates superior performance in both training convergence and voltage regulation. When tested on an IEEE distribution feeder with real historical data, the framework uses less than 20% of the total resources of the Transformer Terminal Unit (TTU), making it feasible to deploy on edge devices with limited computational capacity.
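The paper's gated policy network is not reproduced here; as a rough, hypothetical illustration of the update-gate/reset-gate mechanism the abstract describes, a standard GRU-style hidden-state update (with made-up dimensions standing in for measured bus voltages) might be sketched as:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, params):
    """One gated update of the hidden state h given observation x.

    z (update gate) decides how much of the old state is replaced;
    r (reset gate) decides how much of the old state feeds the candidate.
    """
    Wz, Uz, Wr, Ur, Wh, Uh = params
    z = sigmoid(Wz @ x + Uz @ h)               # update gate
    r = sigmoid(Wr @ x + Ur @ h)               # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))   # candidate hidden state
    return (1.0 - z) * h + z * h_tilde         # gated blend, passed to next step

rng = np.random.default_rng(0)
obs_dim, hid_dim = 4, 8   # hypothetical sizes: 4 measurements in, 8 hidden features
params = tuple(rng.normal(scale=0.1, size=shape) for shape in
               [(hid_dim, obs_dim), (hid_dim, hid_dim)] * 3)

h = np.zeros(hid_dim)
for t in range(5):                    # roll the state through 5 time intervals
    x = rng.normal(size=obs_dim)      # stand-in for measured voltages at step t
    h = gru_step(x, h, params)
print(h.shape)                        # → (8,)
```

This is only the recurrent-gating ingredient; in the paper's framework such a gated state would feed an actor-critic (DDPG-style) policy and value head, which are not sketched here.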
Pages: 11
Related Papers
50 records
  • [1] Deep Reinforcement Learning Based Task Scheduling in Edge Computing Networks
    Qi, Fan
    Li, Zhuo
    Chen, Xin
    2020 IEEE/CIC INTERNATIONAL CONFERENCE ON COMMUNICATIONS IN CHINA (ICCC), 2020, : 835 - 840
  • [2] A Reinforcement Learning Based Voltage Regulation Strategy for Active Distribution Networks
    Wang, Can
    Li, Chang
    Li, Yong
    Liu, Jiayan
    Ling, Feng
    Liu, Qi
    2023 2ND ASIAN CONFERENCE ON FRONTIERS OF POWER AND ENERGY, ACFPE, 2023, : 50 - 54
  • [3] Deep Reinforcement Learning for Collaborative Edge Computing in Vehicular Networks
    Li, Mushu
    Gao, Jie
    Zhao, Lian
    Shen, Xuemin
    IEEE TRANSACTIONS ON COGNITIVE COMMUNICATIONS AND NETWORKING, 2020, 6 (04) : 1122 - 1135
  • [4] Learning IoV in Edge: Deep Reinforcement Learning for Edge Computing Enabled Vehicular Networks
    Xu, Shilin
    Guo, Caili
    Hu, Rose Qingyang
    Qian, Yi
    IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC 2021), 2021,
  • [5] Using Q-Learning for OLTC Voltage Regulation in PV-Rich Distribution Networks
    Custodio, Guilherme
    Ochoa, Luis F.
    Trindade, F. C. L.
    Alpcan, Tansu
    2020 INTERNATIONAL CONFERENCE ON SMART GRIDS AND ENERGY SYSTEMS (SGES 2020), 2020, : 482 - 487
  • [6] A Deep Reinforcement Learning Scheme for SCMA-Based Edge Computing in IoT Networks
    Liu, Pengtao
    Lei, Jing
    Liu, Wei
    2022 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM 2022), 2022, : 5044 - 5049
  • [7] A Power Allocation Algorithm in Vehicular Edge Computing Networks Based on Deep Reinforcement Learning
    Qiu, B.
    Wang, Y.
    Xiao, H.
    Beijing Youdian Daxue Xuebao/Journal of Beijing University of Posts and Telecommunications, 2024, 47 (02): : 81 - 89
  • [8] Deep reinforcement learning based topology-aware voltage regulation of distribution networks with distributed energy storage
    Xiang, Yue
    Lu, Yu
    Liu, Junyong
    APPLIED ENERGY, 2023, 332
  • [9] A Deep Reinforcement Learning Based Offloading Game in Edge Computing
    Zhan, Yufeng
    Guo, Song
    Li, Peng
    Zhang, Jiang
    IEEE TRANSACTIONS ON COMPUTERS, 2020, 69 (06) : 883 - 893
  • [10] Computation Offloading in Edge Computing Based on Deep Reinforcement Learning
    Li, MingChu
    Mao, Ning
    Zheng, Xiao
    Gadekallu, Thippa Reddy
    PROCEEDINGS OF INTERNATIONAL CONFERENCE ON COMPUTING AND COMMUNICATION NETWORKS (ICCCN 2021), 2022, 394 : 339 - 353