A Hybrid Linear Programming-Reinforcement Learning Method for Optimal Energy Hub Management

Cited by: 6
Authors:
Ghadertootoonchi, Alireza [1 ]
Moeini-Aghtaie, Moein [1 ]
Davoudi, Mehdi [1 ]
Affiliations:
[1] Sharif Univ Technol, Dept Energy Engn, Tehran 1474949465, Iran
Keywords:
Energy management; energy hub; energy storage; optimal scheduling; reinforcement learning; DEMAND RESPONSE; OPTIMIZATION; REANALYSIS; SYSTEMS
DOI: 10.1109/TSG.2022.3197458
Chinese Library Classification (CLC): TM [Electrical Engineering]; TN [Electronic and Communication Technology]
Discipline codes: 0808; 0809
Abstract:
Reinforcement learning (RL) is a subset of artificial intelligence in which a decision-making agent learns to act optimally in an environment by controlling different parameters. Such a method requires no identification or mathematical formulation of the environmental constraints, and the RL agent needs no prior information about future outcomes to act optimally in the current situation. Its performance, however, degrades with environmental complexity, which increases the effort the agent needs to choose the optimal action in a given condition. Integrating RL with linear programming (LP) tackles this problem by reducing the state-action space the agent must learn. To this end, the optimization variables are divided into two categories: experience-dependent variables, which have an inter-time dependency and whose values depend on the agent's decisions, and experience-independent variables, whose values are determined by the LP model and have no inter-time coupling. Numerical results demonstrate the hybrid model's effectiveness, converging to the global optimum with more than 95% accuracy.
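The abstract describes the variable split but not its implementation; the following Python sketch is purely illustrative of that decomposition, not the authors' formulation. Everything in it is an assumption: the toy hub layout (CHP, boiler, battery), the tariffs and efficiencies, the use of tabular Q-learning for the experience-dependent battery decision, and scipy's linprog for the experience-independent hourly dispatch.

```python
# Illustrative sketch only (not the paper's model): Q-learning handles the
# experience-dependent variable -- battery charge/discharge, which couples
# hours through the state of charge -- while an hourly LP dispatches the
# experience-independent variables (grid purchase, CHP gas, boiler gas).
# All numbers below are invented.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
T = 24                                                # hours per episode
PRICE_E = 0.10 + 0.10 * (np.arange(T) >= 8)           # toy grid tariff, $/kWh
PRICE_G = 0.04                                        # gas price, $/kWh
D_E = 20 + 10 * np.sin(2 * np.pi * np.arange(T) / T)  # electric demand, kW
D_H = 15 + 5 * np.cos(2 * np.pi * np.arange(T) / T)   # heat demand, kW
ETA_CHP_E, ETA_CHP_H, ETA_BOILER = 0.35, 0.45, 0.90   # converter efficiencies
P_BATT, SOC_MAX, SOC_STEPS = 5.0, 20.0, 4             # battery kW / kWh / buckets
ACTIONS = (-P_BATT, 0.0, P_BATT)                      # discharge / idle / charge

def dispatch_cost(t, p_batt):
    """Experience-independent LP for hour t.
    x = [grid_kWh, chp_gas_kWh, boiler_gas_kWh] >= 0 (linprog default bounds)."""
    c = [PRICE_E[t], PRICE_G, PRICE_G]
    a_eq = [[1.0, ETA_CHP_E, 0.0]]           # grid + CHP power = load + charging
    b_eq = [D_E[t] + p_batt]
    a_ub = [[0.0, -ETA_CHP_H, -ETA_BOILER]]  # CHP heat + boiler heat >= heat load
    b_ub = [-D_H[t]]
    res = linprog(c, A_ub=a_ub, b_ub=b_ub, A_eq=a_eq, b_eq=b_eq, method="highs")
    return res.fun if res.success else 1e6   # large penalty if infeasible

def soc_bucket(soc):
    return int(round(soc / SOC_MAX * SOC_STEPS))

Q = np.zeros((T, SOC_STEPS + 1, len(ACTIONS)))  # (hour, SOC bucket, action)
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.2

for episode in range(500):
    soc = SOC_MAX / 2
    for t in range(T):
        s = soc_bucket(soc)
        a = rng.integers(len(ACTIONS)) if rng.random() < EPS else int(np.argmax(Q[t, s]))
        p = min(max(ACTIONS[a], -soc), SOC_MAX - soc)  # keep SOC feasible
        cost = dispatch_cost(t, p)                     # LP fills in the rest
        soc += p
        future = GAMMA * np.max(Q[t + 1, soc_bucket(soc)]) if t + 1 < T else 0.0
        Q[t, s, a] += ALPHA * (-cost + future - Q[t, s, a])

# Greedy rollout with the learned battery policy
soc, total = SOC_MAX / 2, 0.0
for t in range(T):
    p = min(max(ACTIONS[int(np.argmax(Q[t, soc_bucket(soc)]))], -soc), SOC_MAX - soc)
    total += dispatch_cost(t, p)
    soc += p
print(f"greedy 24-hour dispatch cost: ${total:.2f}")
```

The point the sketch tries to mirror is the one the abstract emphasizes: the Q-table only spans the hour and the battery state of charge, because every decision without inter-time coupling is delegated to the LP, so the state-action space the agent must learn stays small.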
Pages: 157-166
Number of pages: 10
Related papers (50 in total):
• [31] Lu, Qing; Lu, Shuaikang; Leng, Yajun; Zhang, Zhixin. Optimal household energy management based on smart residential energy hub considering uncertain behaviors. ENERGY, 2020, 195.
• [32] Lee, Seongwoo; Seon, Joonho; Sun, Young Ghyu; Kim, Soo Hyun; Kyeong, Chanuk; Kim, Dong In; Kim, Jin Young. Novel Architecture of Energy Management Systems Based on Deep Reinforcement Learning in Microgrid. IEEE TRANSACTIONS ON SMART GRID, 2024, 15 (02): 1646-1658.
• [33] Mohammadi, Parisa; Darshi, Razieh; Shamaghdari, Saeed; Siano, Pierluigi. Comparative Analysis of Control Strategies for Microgrid Energy Management with a Focus on Reinforcement Learning. IEEE ACCESS, 2024, 12: 171368-171395.
• [34] Foruzan, Elham; Soh, Leen-Kiat; Asgarpoor, Sohrab. Reinforcement Learning Approach for Optimal Distributed Energy Management in a Microgrid. IEEE TRANSACTIONS ON POWER SYSTEMS, 2018, 33 (05): 5749-5758.
• [35] Kuznetsova, Elizaveta; Li, Yan-Fu; Ruiz, Carlos; Zio, Enrico; Ault, Graham; Bell, Keith. Reinforcement learning for microgrid energy management. ENERGY, 2013, 59: 133-146.
• [36] Li, Na; Li, Xun; Peng, Jing; Xu, Zuo Quan. Stochastic Linear Quadratic Optimal Control Problem: A Reinforcement Learning Method. IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2022, 67 (09): 5009-5016.
• [37] Zhang, Hailong; Peng, Jiankun; Dong, Hanxuan; Ding, Fan; Tan, Huachun. Integrated Velocity Optimization and Energy Management Strategy for Hybrid Electric Vehicle Platoon: A Multiagent Reinforcement Learning Approach. IEEE TRANSACTIONS ON TRANSPORTATION ELECTRIFICATION, 2024, 10 (02): 2547-2561.
• [38] Salehimaleh, Mohammad; Akbarimajd, Adel; Valipour, Khalil; Dejamkhooy, Abdolmajid. Generalized modeling and optimal management of energy hub based electricity, heat and cooling demands. ENERGY, 2018, 159: 669-685.
• [39] Du, Guodong; Zou, Yuan; Zhang, Xudong; Kong, Zehui; Wu, Jinlong; He, Dingbo. Intelligent energy management for hybrid electric tracked vehicles using online reinforcement learning. APPLIED ENERGY, 2019, 251.
• [40] Ban, Lan. Optimization of Energy Management Algorithm for Hybrid Power Systems Based on Deep Reinforcement Learning. STUDIES IN INFORMATICS AND CONTROL, 2024, 33 (02): 15-25.