Optimal Scheduled Control Operation of Battery Energy Storage System using Model-Free Reinforcement Learning

Cited by: 1
Authors
Selim, Alaa [1 ]
Affiliations
[1] Univ New South Wales, Sch Engn & Informat Technol, Canberra, ACT, Australia
Source
2022 IEEE SUSTAINABLE POWER AND ENERGY CONFERENCE (ISPEC) | 2022
Keywords
Battery energy storage system; Control optimization; State of charge; Power sharing; Model-free; Reinforcement learning
DOI
10.1109/iSPEC54162.2022.10033035
Chinese Library Classification
TE [Petroleum and Natural Gas Industry]; TK [Energy and Power Engineering]
Discipline Classification Code
0807; 0820
Abstract
Driven by the rapid growth of rooftop solar panels and battery installations across Australian states, several studies have examined how to manage battery operation alongside imported grid power through battery energy storage systems (BESS). It is therefore crucial for the BESS to carefully decide the power set-points of the installed batteries so that user comfort is maintained while household appliances operate. In addition, the BESS should be able to reduce electricity bills by optimally managing battery operation through periods of higher tariff prices for imported grid power. This paper formulates the scheduled operation of the BESS as a Markov Decision Process (MDP), enabling the BESS to evaluate numerous scenarios and decide the optimal power set-points for both the batteries and the grid supply. A model-free reinforcement learning approach is proposed to manage the batteries' power-sharing and grid operation set-points and thereby solve this MDP. The approach leverages the Deep Deterministic Policy Gradient (DDPG) algorithm to decide the shared power set-points at 5-minute intervals for day-ahead operation of the BESS. Finally, the proposed model is trained and validated on historical data from the Australian National Electricity Market, yielding an optimal scheduled control pattern for daily BESS operations.
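As a rough illustration of the formulation described in the abstract, the Python sketch below encodes the scheduling problem as a one-day, 5-minute-resolution environment: the state is (state of charge, current tariff, time of day), the action is a continuous battery power set-point, and the reward is the negative cost of imported grid power over the interval. The class name, parameter values, and the synthetic tariff and net-load profiles are illustrative assumptions standing in for the paper's NEM data, not its actual formulation.

import numpy as np

# Hedged sketch of the MDP, not the paper's exact model:
#   state  = (SoC, tariff at step k, normalized time of day)
#   action = battery power set-point in kW (positive = discharge, negative = charge)
#   reward = negative cost of imported grid power over one 5-minute interval
class BessEnv:
    """One-day BESS scheduling environment at 5-minute resolution (288 steps)."""

    def __init__(self, capacity_kwh=10.0, p_max_kw=5.0, soc_min=0.1, soc_max=0.9):
        self.capacity_kwh = capacity_kwh      # usable battery capacity (assumed)
        self.p_max_kw = p_max_kw              # inverter rating (assumed)
        self.soc_min, self.soc_max = soc_min, soc_max
        self.dt_h = 5.0 / 60.0                # 5-minute interval, in hours
        self.horizon = 288                    # intervals in one day
        t = np.arange(self.horizon)
        # Synthetic stand-ins for a time-of-use tariff ($/kWh, peak 5-9 pm) and
        # household net load (kW); the paper instead uses historical NEM data.
        self.tariff = np.where((t >= 17 * 12) & (t < 21 * 12), 0.45, 0.22)
        self.net_load_kw = 1.5 + 1.0 * np.sin(2 * np.pi * t / self.horizon)

    def reset(self):
        self.k = 0
        self.soc = 0.5
        return np.array([self.soc, self.tariff[0], 0.0])

    def step(self, action_kw):
        # Clip the set-point to the inverter rating, then to the energy
        # actually available for charging (headroom) or discharging (stored).
        p = float(np.clip(action_kw, -self.p_max_kw, self.p_max_kw))
        charge_limit = (self.soc_max - self.soc) * self.capacity_kwh / self.dt_h
        discharge_limit = (self.soc - self.soc_min) * self.capacity_kwh / self.dt_h
        p = float(np.clip(p, -charge_limit, discharge_limit))
        self.soc -= p * self.dt_h / self.capacity_kwh
        # The grid imports whatever the battery does not cover (no export here).
        grid_kw = max(self.net_load_kw[self.k] - p, 0.0)
        reward = -self.tariff[self.k] * grid_kw * self.dt_h
        self.k += 1
        done = self.k >= self.horizon
        obs = None if done else np.array(
            [self.soc, self.tariff[self.k], self.k / self.horizon])
        return obs, reward, done

A random-policy rollout (env = BessEnv(); obs = env.reset(); obs, r, done = env.step(2.0)) exercises the interface; an off-the-shelf continuous-control DDPG implementation, such as the one in Stable-Baselines3, could then be trained against a Gym-style wrapper of this class. Note the reward here is a simplified bill term only and omits export tariffs and the user-comfort considerations the paper raises.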
Pages: 5