Explainable Deep Reinforcement Learning for Multi-Agent Electricity Market Simulations

Cited: 1
Authors
Miskiw, Kim K. [1]
Staudt, Philipp [2]
Affiliations
[1] Karlsruhe Inst Technol, Informat & Market Engn, Karlsruhe, Germany
[2] Carl von Ossietzky Univ Oldenburg, Environm & Sustainable Informat Syst, Oldenburg, Germany
Source
2024 20TH INTERNATIONAL CONFERENCE ON THE EUROPEAN ENERGY MARKET, EEM 2024 | 2024
Keywords
Agent-based simulation; electricity markets; multi-agent deep reinforcement learning; explainable reinforcement learning
DOI
10.1109/EEM60825.2024.10608907
Chinese Library Classification (CLC)
X [Environmental Science, Safety Science]
Discipline Classification Codes
08; 0830
Abstract
As electricity systems evolve in light of increasing volatility and market variety, understanding market dynamics through simulation becomes crucial. Deep reinforcement learning (DRL) combined with agent-based models (ABM) is attracting growing attention because it allows modeling the strategic bidding behavior of electricity market participants. However, since DRL is a black-box method, the learned behavior of market participants is neither explainable nor interpretable for modelers. We bridge this explainability gap in agent-based electricity market simulations by leveraging explainable DRL methods. The reviewed literature underscores the novelty of this approach, especially in multi-agent DRL settings. A case study comparing DRL and rule-based bidding strategies in the German electricity market showcases the method's potential. By analyzing the bidding strategies of 118 competing DRL agents with clustering approaches and DeepSHAP, we investigate the underlying factors driving agent decisions, contributing to the development of transparent ABMs.
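The abstract's analysis pipeline (clustering learned bidding strategies, then attributing policy outputs to input features with DeepSHAP) can be illustrated with a short sketch. The paper provides no code; everything below is a hypothetical reconstruction using PyTorch, scikit-learn, and the shap library. The BidPolicy network, its three observation features, and the cluster count are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: cluster agents' bids, then explain a policy network
# with DeepSHAP. All shapes, features, and hyperparameters are assumed.
import numpy as np
import torch
import torch.nn as nn
import shap
from sklearn.cluster import KMeans

class BidPolicy(nn.Module):
    """Toy bidding policy: market observation -> bid price (EUR/MWh)."""
    def __init__(self, n_obs: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_obs, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

policy = BidPolicy()            # in practice: one trained DRL agent
obs = torch.randn(500, 3)       # placeholder observations, e.g.
                                # (residual load, marginal cost, capacity)

# Step 1: cluster the realised bidding behaviour. Here each sample is a
# single bid; per-agent bid trajectories would be clustered analogously.
with torch.no_grad():
    bids = policy(obs).numpy()
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(bids)

# Step 2: DeepSHAP attributions of the bid price to each observation
# feature, using a background sample as the reference distribution.
explainer = shap.DeepExplainer(policy, obs[:100])
shap_values = explainer.shap_values(obs[100:200])

# Mean absolute attribution per feature (robust to the varying output
# shapes that different shap versions return).
attributions = np.abs(np.asarray(shap_values)).squeeze()
print(labels[:10], attributions.mean(axis=0))
```

In the paper's setting this analysis would be repeated per agent, with clusters summarizing which of the 118 agents learned similar strategies and the SHAP attributions indicating which market observations drive each cluster's bids.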
Pages: 9
Related Papers
50 items in total (items [21]-[30] shown)
  • [21] PowerNet: Multi-Agent Deep Reinforcement Learning for Scalable Powergrid Control
    Chen, Dong
    Chen, Kaian
    Li, Zhaojian
    Chu, Tianshu
    Yao, Rui
    Qiu, Feng
    Lin, Kaixiang
    IEEE TRANSACTIONS ON POWER SYSTEMS, 2022, 37 (02) : 1007 - 1017
  • [22] Short-Term Electricity Futures Investment Strategies for Power Producers Based on Multi-Agent Deep Reinforcement Learning
    Wang, Yizheng
    Shi, Enhao
    Xu, Yang
    Hu, Jiahua
    Feng, Changsen
    ENERGIES, 2024, 17 (21)
  • [23] Multi-Agent Deep Reinforcement Learning for Resource Allocation in the Multi-Objective HetNet
    Nie, Hongrui
    Li, Shaosheng
    Liu, Yong
    IWCMC 2021: 2021 17TH INTERNATIONAL WIRELESS COMMUNICATIONS & MOBILE COMPUTING CONFERENCE (IWCMC), 2021, : 116 - 121
  • [24] Electricity auction market simulation with multi-agent model
    Zou, B
    Li, QH
    Ding, F
    DYNAMICS OF CONTINUOUS DISCRETE AND IMPULSIVE SYSTEMS-SERIES A-MATHEMATICAL ANALYSIS, 2006, 13 : 1436 - 1445
  • [25] Multi-agent communication cooperation based on deep reinforcement learning and information theory
    Gao, Bing
    Zhang, Zhejie
    Zou, Qijie
    Liu, Zhiguo
    Zhao, Xiling
HANGKONG XUEBAO/ACTA AERONAUTICA ET ASTRONAUTICA SINICA, 2024, 45 (18)
  • [26] UAV-Enabled Secure Communications by Multi-Agent Deep Reinforcement Learning
    Zhang, Yu
    Mou, Zhiyu
    Gao, Feifei
    Jiang, Jing
    Ding, Ruijin
    Han, Zhu
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2020, 69 (10) : 11599 - 11611
  • [27] Multi-Period and Multi-Spatial Equilibrium Analysis in Imperfect Electricity Markets: A Novel Multi-Agent Deep Reinforcement Learning Approach
    Ye, Yujian
    Qiu, Dawei
    Li, Jing
    Strbac, Goran
    IEEE ACCESS, 2019, 7 : 130515 - 130529
  • [28] Distributed Multi-Agent Deep Reinforcement Learning for Robust Coordination against Noise
    Motokawa, Yoshinari
    Sugawara, Toshiharu
2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022
  • [29] Multi-Agent Deep Reinforcement Learning for content caching within the Internet of Vehicles
    Knari, Anas
    Derfouf, Mostapha
    Koulali, Mohammed-Amine
    Khoumsi, Ahmed
    AD HOC NETWORKS, 2024, 152
  • [30] Multi-agent deep reinforcement learning for computation offloading in cooperative edge network
    Wu, Pengju
    Guan, Yepeng
JOURNAL OF INTELLIGENT INFORMATION SYSTEMS, 2024 : 567 - 591