Development of a Soft Actor Critic deep reinforcement learning approach for harnessing energy flexibility in a Large Office building

Cited by: 31
Authors
Kathirgamanathan, Anjukan [1 ,2 ,4 ]
Mangina, Eleni [2 ,3 ]
Finn, Donal P. [1 ,2 ]
Affiliations
[1] Univ Coll Dublin, Sch Mech & Mat Engn, Dublin, Ireland
[2] Univ Coll Dublin, UCD Energy Inst, O'Brien Ctr Sci, Dublin, Ireland
[3] Univ Coll Dublin, Sch Comp Sci, Dublin, Ireland
[4] Univ Coll Dublin, Energy Inst, Dublin 4, Ireland
Funding
Science Foundation Ireland
Keywords
Deep Reinforcement Learning (DRL); Building energy flexibility; Soft Actor Critic (SAC); Machine learning; Smart grid; Demand response; Management; System
DOI
10.1016/j.egyai.2021.100101
CLC number
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
This research concerns the novel application and investigation of Soft Actor Critic (SAC) based deep reinforcement learning to control the cooling setpoint (and hence the cooling loads) of a large commercial building in order to harness energy flexibility. The research is motivated by the challenge of developing and applying conventional model-based control approaches at scale across the wider building stock. SAC is a model-free deep reinforcement learning technique that can handle continuous action spaces and that has so far seen limited application in real-life or high-fidelity simulation settings for the automated and intelligent control of building energy systems. Such control techniques are one possible route to supporting the operation of a smart, sustainable, future electrical grid. This research tests the suitability of the technique by training and deploying the agent in an EnergyPlus-based environment of the office building. The agent learnt an optimal control policy that reduced energy costs by 9.7% compared to the default rule-based control scheme, while maintaining or improving thermal comfort over a one-week test period. The algorithm was shown to be robust to different hyperparameter choices, and the optimal control policy was learnt using a minimal state space of readily available variables. Robustness was further tested by examining the speed of learning and the ability to deploy the agent in different seasons and climates. The agent required relatively few training samples, outperforming the baseline after three months of operation without disrupting thermal comfort during that period. The agent is transferable to other climates and seasons, although further retraining or hyperparameter tuning is recommended.
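As an illustration of the control setup the abstract describes, the sketch below trains a SAC agent to adjust a cooling setpoint offset in a gym-style building simulation. It is a minimal sketch only: the environment wrapper LargeOfficeCoolingEnv, the observation variables, the action bounds, the reward weights, and the use of the stable-baselines3 SAC implementation are all illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the authors' code): a Soft Actor Critic agent
# controlling a continuous cooling-setpoint offset in a hypothetical
# gym-style wrapper around a building simulation such as EnergyPlus.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import SAC


class LargeOfficeCoolingEnv(gym.Env):
    """Hypothetical building environment (placeholder dynamics).

    Observation: a minimal state of readily available variables, e.g.
    [outdoor temp, indoor temp, electricity price, hour of day].
    Action: a continuous cooling-setpoint offset in degrees C.
    """

    def __init__(self):
        self.observation_space = spaces.Box(
            low=-np.inf, high=np.inf, shape=(4,), dtype=np.float32)
        self.action_space = spaces.Box(
            low=-2.0, high=2.0, shape=(1,), dtype=np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        return self._read_state(), {}  # would query the simulator here

    def step(self, action):
        # Apply the setpoint offset, advance the simulation one timestep,
        # then penalise both energy cost and thermal-comfort violations.
        cost, comfort_violation = self._simulate(action)
        reward = -cost - 10.0 * comfort_violation  # illustrative weights
        return self._read_state(), reward, False, False, {}

    # Placeholders standing in for the EnergyPlus co-simulation interface.
    def _read_state(self):
        return np.zeros(4, dtype=np.float32)

    def _simulate(self, action):
        return 0.0, 0.0


env = LargeOfficeCoolingEnv()
agent = SAC("MlpPolicy", env, verbose=0)  # model-free, continuous actions
agent.learn(total_timesteps=50_000)       # training on the simulator

obs, _ = env.reset()
action, _ = agent.predict(obs, deterministic=True)  # one deployed control step
```

The negatively weighted comfort-violation term reflects the trade-off the abstract reports (cost savings while maintaining comfort); the actual reward formulation and weighting used in the study are given in the full paper.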
Pages: 14