Federated reinforcement learning for sustainable and cost-efficient energy management

Cited by: 0
Authors
Sievers, J. [1 ]
Henrich, P. [1 ]
Beichter, M. [2 ]
Mikut, R. [2 ]
Hagenmeyer, V. [2 ]
Blank, T. [1 ]
Simon, F. [1 ]
Affiliations
[1] Karlsruhe Inst Technol KIT, Inst Data Proc & Elect IPE, Hermann von Helmholtz Pl 1, D-76344 Eggenstein Leopoldshafen, Germany
[2] Karlsruhe Inst Technol KIT, Inst Automat & Appl Informat IAI, Hermann von Helmholtz Pl 1, D-76344 Eggenstein Leopoldshafen, Germany
Keywords
Reinforcement learning; Federated learning; Energy management; Smart grid; Storage
DOI
10.1016/j.egyai.2025.100521
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Integrating renewable energy sources into the electricity grid introduces volatility and complexity, requiring advanced energy management systems. By optimizing the charging and discharging behavior of a building's battery system, reinforcement learning effectively provides flexibility, managing volatile energy demand, dynamic pricing, and photovoltaic output to maximize rewards. However, the effectiveness of reinforcement learning is often hindered by limited access to training data due to privacy concerns, unstable training processes, and challenges in generalizing to different household conditions. In this study, we propose a novel federated framework for reinforcement learning in energy management systems. By enabling local model training on private data and aggregating only model parameters on a global server, this approach not only preserves privacy but also improves model generalization and robustness under varying household conditions, while decreasing electricity costs and emissions per building. For a comprehensive benchmark, we compare standard reinforcement learning with our federated approach and include mixed integer programming and rule-based systems. Among the reinforcement learning methods, deep deterministic policy gradient performed best on the Ausgrid dataset, with federated learning reducing costs by 5.01 % and emissions by 4.60 %. Federated learning also improved zero-shot performance for unseen buildings, reducing costs by 5.11 % and emissions by 5.55 %. Thus, our findings highlight the potential of federated reinforcement learning to enhance energy management systems by balancing privacy, sustainability, and efficiency.
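The abstract describes the core federated mechanism: each building trains a policy locally on its private data, and only model parameters are sent to a global server for aggregation. A minimal sketch of that parameter-averaging round is shown below, assuming a FedAvg-style mean over client weights (the paper's exact aggregation rule and local optimizer are not given in this record, so the gradient step and weight layout here are illustrative).

```python
# Sketch of one federated round: buildings update a shared policy locally,
# the server averages only the resulting parameters (FedAvg-style assumption).
# Raw household consumption/PV data never leaves the client.

def local_update(global_w, grads, lr=0.01):
    """Hypothetical single local training step on a building's private data."""
    return [w - lr * g for w, g in zip(global_w, grads)]

def fedavg(client_weights):
    """Server-side aggregation: element-wise mean of client parameters."""
    n = len(client_weights)
    return [sum(params) / n for params in zip(*client_weights)]

# Toy example: a 2-parameter policy shared across three buildings.
global_w = [0.0, 0.0]
clients = [local_update(global_w, [i + 1.0, i + 1.0]) for i in range(3)]
new_global = fedavg(clients)
print(new_global)  # mean of -0.01, -0.02, -0.03 per parameter -> [-0.02, -0.02]
```

In the paper's setting the averaged parameters would belong to a DDPG actor-critic rather than this toy vector, but the privacy property is the same: the server only ever sees weights, not data.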
Pages: 17