Deep reinforcement learning based voltage regulation in edge computing paradigm for PV rich distribution networks

Cited by: 0
|
Authors
Li, Chang [1 ]
Li, Yong [1 ,2 ]
Liu, Jiayan [1 ]
Kleemann, Michael [3 ]
Xie, Liwei [2 ,5 ]
Peng, Jing [2 ,6 ]
Xu, Jie [2 ,6 ]
Wang, Can [4 ]
Cao, Yijia [1 ]
Affiliations
[1] Hunan Univ, Coll Elect & Informat Engn, Changsha 410082, Peoples R China
[2] Key Lab Satellite Nav Technol, Changsha 410073, Peoples R China
[3] Katholieke Univ Leuven, Dept ESAT, B-9000 Ghent, Belgium
[4] State Grid Hunan Elect Power Co Ltd, Dept Elect Engn, Changsha 410007, Peoples R China
[5] Changsha Univ Sci & Technol, Coll Elect & Informat Engn, Changsha 410114, Peoples R China
[6] Natl Univ Def Technol, Coll Elect Sci & Technol, Changsha 410073, Peoples R China
Keywords
Volt-VAR control; Deep reinforcement learning; Distribution network; Edge computing; PENETRATION; INVERTER;
DOI
10.1016/j.epsr.2024.111159
Chinese Library Classification
TM [Electrical engineering]; TN [Electronic technology, communication technology];
Subject classification codes
0808; 0809;
Abstract
As the penetration of renewable energy in distribution networks continues to rise, the reverse power flow caused by peak outputs from photovoltaic (PV) generation increasingly leads to voltage limit violations. These issues require mitigation through effective voltage regulation strategies. Previous studies have typically modeled the coordinated operation of PV inverters as optimization problems or neural network computation problems, but they have often overlooked the performance of the overall computing system. In this study, we propose a low-power edge computing voltage regulation framework that integrates a gated control policy network, a value evaluation function, and a gated Markov decision process. An update gate and a reset gate control the hidden state of the information flow in the distribution network, ensuring that the state is updated and passed on to the next time step at each interval. Compared to the traditional DDPG, this architecture demonstrates superior performance in both training convergence and voltage regulation. When tested on an IEEE distribution feeder with real historical data, the edge computing voltage regulation framework uses less than 20% of the total resources of the Transformer Terminal Unit (TTU), making it feasible to deploy on edge devices with limited computational capacity.
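The update/reset gating described in the abstract matches the standard GRU recurrence. The sketch below is a minimal, illustrative implementation of one such gated hidden-state update; the weight names, dimensions, and the NumPy setting are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h_prev, params):
    """One gated update of the hidden state (standard GRU equations).

    x      -- observation vector at the current interval (e.g. bus voltages, PV output)
    h_prev -- hidden state carried over from the previous time step
    params -- dict of weights/biases (hypothetical names, for illustration only)
    """
    z = sigmoid(params["Wz"] @ x + params["Uz"] @ h_prev + params["bz"])  # update gate: how much new info enters
    r = sigmoid(params["Wr"] @ x + params["Ur"] @ h_prev + params["br"])  # reset gate: how much past state is reused
    h_tilde = np.tanh(params["Wh"] @ x + params["Uh"] @ (r * h_prev) + params["bh"])  # candidate state
    return (1.0 - z) * h_prev + z * h_tilde  # gated blend of old state and candidate

# Toy dimensions: 4 observed quantities, hidden state of size 3.
rng = np.random.default_rng(0)
shapes = {"Wz": (3, 4), "Uz": (3, 3), "bz": (3,),
          "Wr": (3, 4), "Ur": (3, 3), "br": (3,),
          "Wh": (3, 4), "Uh": (3, 3), "bh": (3,)}
params = {k: rng.standard_normal(s) * 0.1 for k, s in shapes.items()}

h = np.zeros(3)
for t in range(5):  # roll the hidden state forward over five intervals
    x = rng.standard_normal(4)
    h = gru_step(x, h, params)
```

Because the new state is a convex combination of the previous state and a tanh-bounded candidate, the hidden state stays bounded, which is what lets the gate pass information stably from one interval to the next.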
Pages: 11