Deep reinforcement learning based voltage regulation in edge computing paradigm for PV rich distribution networks

Cited by: 0
Authors
Li, Chang [1 ]
Li, Yong [1 ,2 ]
Liu, Jiayan [1 ]
Kleemann, Michael [3 ]
Xie, Liwei [2 ,5 ]
Peng, Jing [2 ,6 ]
Xu, Jie [2 ,6 ]
Wang, Can [4 ]
Cao, Yijia [1 ]
Affiliations
[1] Hunan Univ, Coll Elect & Informat Engn, Changsha 410082, Peoples R China
[2] Key Lab Satellite Nav Technol, Changsha 410073, Peoples R China
[3] Katholieke Univ Leuven, Dept ESAT, B-9000 Ghent, Belgium
[4] State Grid Hunan Elect Power Co Ltd, Dept Elect Engn, Changsha 410007, Peoples R China
[5] Changsha Univ Sci & Technol, Coll Elect & Informat Engn, Changsha 410114, Peoples R China
[6] Natl Univ Def Technol, Coll Elect Sci & Technol, Changsha 410073, Peoples R China
Keywords
Volt-VAR control; Deep reinforcement learning; Distribution network; Edge computing; Penetration; Inverter
DOI
10.1016/j.epsr.2024.111159
CLC Classification Number
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology]
Discipline Classification Codes
0808; 0809
Abstract
As the penetration of renewable energy in distribution networks continues to rise, reverse power flow caused by peak photovoltaic (PV) output increasingly leads to voltage limit violations, which must be mitigated through effective voltage regulation strategies. Previous studies have typically modeled the coordinated operation of PV inverters as an optimization problem or a neural-network computation problem, but they have often overlooked the performance of the overall computing system. In this study, we propose a low-power edge computing voltage regulation framework that integrates a gated control policy network, a value evaluation function, and a gated Markov decision process. An update gate and a reset gate control the hidden state of the information flow in the distribution network, ensuring that it is updated and passed on to the next time step at each control interval. Compared with traditional DDPG, this architecture demonstrates superior performance in both training convergence and voltage regulation. When tested on an IEEE distribution feeder with real historical data, the framework uses less than 20% of the total resources of the Transformer Terminal Unit (TTU), making it feasible to deploy on edge devices with limited computational capacity.
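The gated policy network described in the abstract follows the standard GRU gating scheme, in which an update gate and a reset gate decide how much of the previous hidden state is carried into the next control interval. The sketch below illustrates this idea in PyTorch; the class name, layer sizes, and the tanh scaling of actions to normalized inverter reactive-power setpoints are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class GatedPolicyNetwork(nn.Module):
    """Illustrative GRU-gated actor for Volt-VAR control (assumed structure).

    nn.GRUCell internally computes the update gate z_t and reset gate r_t
    that control how the hidden state (the summarized information flow of
    the distribution network) is updated and passed on to the next time
    step, mirroring the mechanism described in the abstract.
    """

    def __init__(self, obs_dim: int, act_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.gru = nn.GRUCell(obs_dim, hidden_dim)  # gates live inside the cell
        self.head = nn.Linear(hidden_dim, act_dim)

    def forward(self, obs: torch.Tensor, h: torch.Tensor):
        h_next = self.gru(obs, h)                # gated hidden-state update
        action = torch.tanh(self.head(h_next))   # normalized VAR setpoints in [-1, 1]
        return action, h_next

# Hypothetical usage: 33 bus measurements in, setpoints for 6 PV inverters out.
net = GatedPolicyNetwork(obs_dim=33, act_dim=6)
h = torch.zeros(1, 64)                  # initial hidden state
action, h = net(torch.randn(1, 33), h)  # one control interval
```

In a DDPG-style training loop, such an actor would be paired with a critic (the value evaluation function), with the hidden state threading the gated Markov decision process across control intervals.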
Pages: 11
Related Papers (50 records)
  • [41] Reinforcement learning based tasks offloading in vehicular edge computing networks
    Cao, Shaohua
    Liu, Di
    Dai, Congcong
    Wang, Chengqi
    Yang, Yansheng
    Zhang, Weishan
    Zheng, Danyang
    COMPUTER NETWORKS, 2023, 234
  • [42] A Neural Network based Deep Reinforcement Learning Controller for Voltage Regulation of Active Distribution Network
    Jain, Jatin
    Mohamed, Ahmed
    Rahman, Tanvir
    Ali, Mohamed
    2024 IEEE 5TH ANNUAL WORLD AI IOT CONGRESS, AIIOT 2024, 2024, : 0280 - 0285
  • [43] Deep Reinforcement Learning-Based Voltage Control to Deal with Model Uncertainties in Distribution Networks
    Toubeau, Jean-Francois
    Zad, Bashir Bakhshideh
    Hupez, Martin
    De Greve, Zacharie
    Vallee, Francois
    ENERGIES, 2020, 13 (15)
  • [44] Deep Reinforcement Learning for Task Offloading in Edge Computing
    Xie, Bo
    Cui, Haixia
    2024 4TH INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND INTELLIGENT SYSTEMS ENGINEERING, MLISE 2024, 2024, : 250 - 254
  • [45] Deep Reinforcement Learning Approach for UAV-Assisted Mobile Edge Computing Networks
    Hwang, Sangwon
    Park, Juseong
    Lee, Hoon
    Kim, Mintae
    Lee, Inkyu
    2022 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM 2022), 2022, : 3839 - 3844
  • [46] Permissioned Blockchain and Deep Reinforcement Learning for Content Caching in Vehicular Edge Computing and Networks
    Dai, Yueyue
    Xu, Du
    Zhang, Ke
    Maharjan, Sabita
    Zhang, Yan
2019 11TH INTERNATIONAL CONFERENCE ON WIRELESS COMMUNICATIONS AND SIGNAL PROCESSING (WCSP), 2019
  • [47] Adaptive Digital Twin and Multiagent Deep Reinforcement Learning for Vehicular Edge Computing and Networks
    Zhang, Ke
    Cao, Jiayu
    Zhang, Yan
    IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2022, 18 (02) : 1405 - 1413
  • [48] Deep Reinforcement Learning and Permissioned Blockchain for Content Caching in Vehicular Edge Computing and Networks
    Dai, Yueyue
    Xu, Du
    Zhang, Ke
    Maharjan, Sabita
    Zhang, Yan
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2020, 69 (04) : 4312 - 4324
  • [49] Mobility-Aware Edge Caching and Computing in Vehicle Networks: A Deep Reinforcement Learning
    Le Thanh Tan
    Hu, Rose Qingyang
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2018, 67 (11) : 10190 - 10203
  • [50] iRAF: A Deep Reinforcement Learning Approach for Collaborative Mobile Edge Computing IoT Networks
    Chen, Jienan
    Chen, Siyu
    Wang, Qi
    Cao, Bin
    Feng, Gang
    Hu, Jianhao
IEEE INTERNET OF THINGS JOURNAL, 2019, 6 (04) : 7011 - 7024