Explainable reinforcement learning for distribution network reconfiguration

Cited by: 3
Authors
Gholizadeh, Nastaran [1]
Musilek, Petr [1,2]
Affiliations
[1] Univ Alberta, Elect & Comp Engn, Edmonton, AB, Canada
[2] Univ Hradec Kralove, Appl Cybernet, Hradec Kralove, Czech Republic
Funding
Natural Sciences and Engineering Research Council of Canada;
Keywords
Distribution network reconfiguration; Reinforcement learning; Deep Q-learning; Data-driven control; Explainable machine learning; DYNAMIC RECONFIGURATION; OPERATION;
DOI
10.1016/j.egyr.2024.05.031
Chinese Library Classification (CLC)
TE [Petroleum and Natural Gas Industry]; TK [Energy and Power Engineering];
Subject classification codes
0807; 0820;
Abstract
The lack of transparency in the decision-making process of reinforcement learning methods has led to significant distrust of these models, limiting their use in critical decision-making applications. Distribution network reconfiguration is an inherently sensitive application of reinforcement learning because it requires changing the states of switches, which can significantly affect switch lifespan; executing this process therefore requires careful and deliberate consideration. This study presents a new methodology to analyze and explain reinforcement learning-based decisions in distribution network reconfiguration. The proposed approach trains an explainer neural network on the decisions of the reinforcement learning agent: the explainer network receives the active and reactive power of the buses at each hour as input and outputs the line states determined by the agent. To examine the inner workings of the explainer network, attribution methods are employed. These techniques expose the relationship between the inputs and outputs of the network, offering insight into the agent's decision-making process. The efficacy of this approach is demonstrated through its application to both the 33- and 136-bus test systems, and the obtained results are presented.
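The abstract describes a two-step idea: first train a supervised explainer network that imitates the agent's reconfiguration decisions (bus-level active/reactive power in, line states out), then apply attribution methods to that network. The sketch below illustrates this general pattern only; it is not the authors' implementation. The PyTorch architecture, the synthetic data, the gradient-times-input attribution rule, and all names (explainer, pq, line_states) are illustrative assumptions, and the paper's own agent (deep Q-learning) and attribution techniques are not reproduced here.

```python
# Illustrative sketch only -- not the paper's code. Assumes PyTorch and uses
# synthetic placeholder data; sizes and names are hypothetical.
import torch
import torch.nn as nn

N_BUSES, N_LINES = 33, 37          # e.g., a 33-bus feeder with 37 switchable lines

# Explainer network: hourly bus injections (P, Q) -> per-line open/closed logits
explainer = nn.Sequential(
    nn.Linear(2 * N_BUSES, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, N_LINES),       # sigmoid is applied inside the loss below
)

# Supervised imitation of the RL agent's decisions (labels = agent's line states)
pq = torch.rand(1024, 2 * N_BUSES)                            # placeholder load profiles
line_states = torch.randint(0, 2, (1024, N_LINES)).float()    # placeholder agent decisions
opt = torch.optim.Adam(explainer.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(explainer(pq), line_states)
    loss.backward()
    opt.step()

# Simple gradient-times-input attribution for one line decision of one sample:
x = pq[:1].clone().requires_grad_(True)
logit = explainer(x)[0, 5]                     # line index 5 chosen arbitrarily
logit.backward()
attribution = (x.grad * x).detach().squeeze()  # per-bus P/Q contribution scores
```

The attribution vector scores how much each bus's active or reactive power contributed to the chosen line state; more refined attribution methods (e.g., integrated gradients) follow the same interface of querying the trained explainer network.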
Pages: 5703 - 5715
Number of pages: 13
Related papers
50 records in total
  • [31] Fault Recovery Decision of Distribution Network Based on Graph Reinforcement Learning
    Zhang P.
    Chen Y.
    Wang G.
    Li X.
Dianli Xitong Zidonghua/Automation of Electric Power Systems, 2024, 48 (02): 151 - 158
  • [32] Multi-objective Genetic Programming for Explainable Reinforcement Learning
    Videau, Mathurin
    Leite, Alessandro
    Teytaud, Olivier
    Schoenauer, Marc
    GENETIC PROGRAMMING (EUROGP 2022), 2022, : 278 - 293
  • [33] Explainable Artificial Intelligence (XAI) Approach for Reinforcement Learning Systems
    Peixoto, Maria J. P.
    Azim, Akramul
    39TH ANNUAL ACM SYMPOSIUM ON APPLIED COMPUTING, SAC 2024, 2024, : 971 - 978
  • [34] Sample-Based Rule Extraction for Explainable Reinforcement Learning
    Engelhardt, Raphael C.
    Lange, Moritz
    Wiskott, Laurenz
    Konen, Wolfgang
    MACHINE LEARNING, OPTIMIZATION, AND DATA SCIENCE, LOD 2022, PT I, 2023, 13810 : 330 - 345
  • [35] XPM: An Explainable Deep Reinforcement Learning Framework for Portfolio Management
    Shi, Si
    Li, Jianjun
    Li, Guohui
    Pan, Peng
    Liu, Ke
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON INFORMATION & KNOWLEDGE MANAGEMENT, CIKM 2021, 2021, : 1661 - 1670
  • [36] Explainable reinforcement learning (XRL): a systematic literature review and taxonomy
    Bekkemoen, Yanzhe
    MACHINE LEARNING, 2024, 113 (01) : 355 - 441
  • [37] Explainable reinforcement learning (XRL): a systematic literature review and taxonomy
    Yanzhe Bekkemoen
    Machine Learning, 2024, 113 : 355 - 441
  • [38] Resilience-based explainable reinforcement learning in chemical process
    Szatmari, Kinga
    Horvath, Gergely
    Nemeth, Sandor
    Bai, Wenshuai
    Kummer, Alex
    COMPUTERS & CHEMICAL ENGINEERING, 2024, 191
  • [39] Reinforcement Learning Based Path Exploration for Sequential Explainable Recommendation
    Li, Yicong
    Chen, Hongxu
    Li, Yile
    Li, Lin
    Yu, Philip S.
    Xu, Guandong
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2023, 35 (11) : 11801 - 11814
  • [40] Optimal Control of Active Distribution Network using Deep Reinforcement Learning
    Tahir, Yameena
    Khan, Muhammad Faisal Nadeem
    Sajjad, Intisar Ali
    Martirano, Luigi
    2022 IEEE INTERNATIONAL CONFERENCE ON ENVIRONMENT AND ELECTRICAL ENGINEERING AND 2022 IEEE INDUSTRIAL AND COMMERCIAL POWER SYSTEMS EUROPE (EEEIC / I&CPS EUROPE), 2022,