Smart Grid for Industry Using Multi-Agent Reinforcement Learning

Cited by: 36
Authors
Roesch, Martin [1 ]
Linder, Christian [1 ]
Zimmermann, Roland [1 ]
Rudolf, Andreas [1 ]
Hohmann, Andrea [1 ]
Reinhart, Gunther [1 ]
Affiliation
[1] Fraunhofer Res Inst Casting Composite & Proc Tech, D-86159 Augsburg, Germany
Source
APPLIED SCIENCES-BASEL | 2020, Vol. 10, Issue 19
Keywords
smart grid; multi-agent reinforcement learning; energy flexibility; production control; ENERGY; INTEGRATION; GENERATION; REDUCTION; SYSTEMS; SHOP;
DOI
10.3390/app10196900
Chinese Library Classification
O6 [Chemistry];
Subject Classification Code
0703;
Abstract
The growing share of renewable power generation leads to increasingly fluctuating and generally rising electricity prices, which poses a challenge for industrial companies. However, electricity expenses can be reduced by adapting the energy demand of production processes to the volatile market prices. This approach represents the emerging paradigm of energy flexibility for reducing electricity costs. At the same time, electricity self-generation offers further possibilities for decreasing energy costs, and energy flexibility can be increased gradually by on-site power storage, e.g., stationary batteries. As a consequence, both the electricity demand of the manufacturing system and the supply side, including battery storage, self-generation, and the energy market, need to be controlled in a holistic manner, resulting in a smart grid solution for industrial sites. This coordination represents a complex optimization problem that is, in addition, highly stochastic due to unforeseen events such as machine breakdowns, changing prices, or changing energy availability. This paper presents an approach to controlling a complex system of production resources, battery storage, electricity self-supply, and short-term market trading using multi-agent reinforcement learning (MARL). The results of a case study demonstrate that the developed system can outperform the frequently used rule-based reactive control strategy (RCS). Although a metaheuristic benchmark based on simulated annealing performs better, MARL enables faster reactions because of its significantly lower computation cost at execution time.
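The abstract describes a holistic control architecture in which learning agents for the production resources and the supply side (battery storage, self-generation, short-term market trading) jointly minimize energy cost under volatile prices. As a rough, minimal sketch of the underlying MARL idea only, the following Python example trains two independent tabular Q-learning agents, one controlling a single machine and one a battery, against a synthetic hourly price curve with a shared cost-based reward. The environment, agents, parameters, and all names are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: independent tabular Q-learning agents (a simple
# MARL baseline) coordinating a production machine and a battery against a
# synthetic hourly price curve. All names, parameters, and the environment
# are assumptions for illustration, not the paper's implementation.
import random
import numpy as np

random.seed(0)
np.random.seed(0)

N_STEPS = 24                                 # one day at hourly resolution
PRICES = (0.10 + 0.08 * np.sin(np.linspace(0, 2 * np.pi, N_STEPS))
          + 0.02 * np.random.randn(N_STEPS)) # volatile price curve (EUR/kWh)

MACHINE_LOAD = 50.0                          # kWh per production step
BATTERY_STEP = 20.0                          # kWh (dis)charged per step
BATTERY_CAP = 60.0                           # kWh storage capacity


class QAgent:
    """Independent learner; the state is simply the time of day."""

    def __init__(self, n_states, n_actions, eps=0.1, alpha=0.1, gamma=0.95):
        self.q = np.zeros((n_states, n_actions))
        self.eps, self.alpha, self.gamma = eps, alpha, gamma

    def act(self, s):
        if random.random() < self.eps:
            return random.randrange(self.q.shape[1])
        return int(np.argmax(self.q[s]))

    def learn(self, s, a, r, s_next, done):
        target = r if done else r + self.gamma * np.max(self.q[s_next])
        self.q[s, a] += self.alpha * (target - self.q[s, a])


def run_episode(machine, battery, train=True):
    """Machine actions: 0 = idle, 1 = produce. Battery actions: 0 = idle,
    1 = charge, 2 = discharge. Both agents share the negative energy cost."""
    soc, jobs_left, total_cost = 0.0, 12, 0.0    # 12 jobs due within 24 h
    for t in range(N_STEPS):
        a_m, a_b = machine.act(t), battery.act(t)
        if jobs_left <= 0:
            a_m = 0                              # nothing left to produce
        elif jobs_left >= N_STEPS - t:
            a_m = 1                              # deadline forces production
        demand = MACHINE_LOAD if a_m == 1 else 0.0
        if a_b == 1 and soc + BATTERY_STEP <= BATTERY_CAP:
            demand += BATTERY_STEP               # buy extra energy to charge
            soc += BATTERY_STEP
        elif a_b == 2 and soc >= BATTERY_STEP:
            demand = max(0.0, demand - BATTERY_STEP)  # cover load from battery
            soc -= BATTERY_STEP
        cost = demand * PRICES[t]
        total_cost += cost
        jobs_left -= a_m
        reward, done = -cost, t == N_STEPS - 1   # shared team reward
        s_next = min(t + 1, N_STEPS - 1)
        if train:
            machine.learn(t, a_m, reward, s_next, done)
            battery.learn(t, a_b, reward, s_next, done)
    return total_cost


machine_agent, battery_agent = QAgent(N_STEPS, 2), QAgent(N_STEPS, 3)
for _ in range(3000):
    run_episode(machine_agent, battery_agent)
machine_agent.eps = battery_agent.eps = 0.0      # greedy evaluation
print("daily energy cost after training:",
      round(run_episode(machine_agent, battery_agent, train=False), 2))
```

Independent learners with a shared reward are only one possible MARL formulation; the sketch is meant solely to illustrate how the control problem can be divided among cooperating agents for demand and supply-side resources.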
Pages: 1-20
Page count: 20