Self-learning Control for Active Network Management

Cited by: 1
Authors
Perez-Olvera, Julio [1 ]
Green, Tim C. [1 ]
Junyent-Ferre, Adria [1 ]
Affiliations
[1] Imperial Coll London, Dept Elect & Elect Engn, London, England
Source
2021 IEEE MADRID POWERTECH | 2021
Keywords
Active network management; deep learning; distribution networks; optimal power flow; reinforcement learning;
DOI
10.1109/PowerTech46648.2021.9494928
Chinese Library Classification (CLC)
X [Environmental science, safety science];
Discipline classification code
08 ; 0830 ;
Abstract
Active network management (ANM) using power electronic devices will become an essential tool for distribution network operators to deal with the variability of a large number of low-carbon technologies. To enable ANM, this paper proposes a control scheme based on deep reinforcement learning, as an alternative to traditional optimisation. The algorithm uses only a small number of network measurements and can learn approximations of optimal control actions, identified in offline simulations, via a neural network. Once trained, the control scheme chooses power converter set-points that can, for instance, even out loadings on different substations in real time without the computational burden of high-level optimisation. The performance of the proposed control algorithm is validated against optimal power flow (OPF) solutions using data from real low-voltage networks. The results show that the solution and benefits are comparable to those obtained by the OPF.
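The core idea in the abstract — learn a cheap neural-network approximation of set-points identified offline, then apply it in real time — can be sketched as follows. This is an illustrative toy only, not the authors' implementation: the "measurements" are two synthetic substation loadings, the "OPF" labels come from a made-up evening-out rule, and the tiny one-hidden-layer MLP and all names are assumptions. The paper's actual scheme uses deep reinforcement learning on real low-voltage network data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "network measurements": loadings on two substations in [0.2, 1.0].
# Stand-in "offline OPF" labels: power to shift so the two loadings even out
# (a toy rule, not a real OPF solution).
X = rng.uniform(0.2, 1.0, size=(500, 2))
y = ((X[:, 0] - X[:, 1]) / 2).reshape(-1, 1)

# One-hidden-layer tanh MLP, trained by full-batch gradient descent on MSE.
W1 = rng.normal(0.0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)

lr = 0.05
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)          # hidden activations, shape (500, 16)
    pred = h @ W2 + b2                # predicted set-points, shape (500, 1)
    err = pred - y                    # MSE gradient factor
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h**2)  # back-prop through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

def set_point(measurements):
    """Real-time inference: a cheap forward pass instead of solving an OPF."""
    h = np.tanh(measurements @ W1 + b1)
    return h @ W2 + b2

# After training, the controller tracks the toy rule: for loadings (0.9, 0.3)
# the shifted power should be roughly (0.9 - 0.3) / 2 = 0.3.
print(set_point(np.array([[0.9, 0.3]]))[0, 0])
```

The point of the sketch is the split the abstract describes: the expensive optimisation happens offline (here, generating `y`), while the online controller is only a small forward pass over a few measurements.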
Pages: 6