Multi-Agent attention-based deep reinforcement learning for demand response in grid-responsive buildings

Cited by: 45
Authors
Xie, Jiahan [1 ]
Ajagekar, Akshay [2 ]
You, Fengqi [2 ,3 ]
Institutions
[1] Cornell Univ, Dept Comp Sci, Ithaca, NY 14853 USA
[2] Cornell Univ, Syst Engn, Ithaca, NY 14853 USA
[3] Cornell Univ, Robert Frederick Smith Sch Chem & Biomol Engn, Ithaca, NY 14853 USA
Keywords
Demand response; Deep reinforcement learning; Multi-agent; Buildings; MANAGEMENT; ALGORITHM
DOI
10.1016/j.apenergy.2023.121162
Chinese Library Classification
TE [Petroleum and Natural Gas Industry]; TK [Energy and Power Engineering]
Subject Classification Codes
0807; 0820
Abstract
Integrating renewable energy resources and deploying energy management devices offer great opportunities to develop autonomous energy management systems in grid-responsive buildings. Demand response can enhance demand flexibility and energy efficiency while reducing consumer costs. In this work, we propose a novel multi-agent deep reinforcement learning (MADRL) based approach, with an agent assigned to each building, to facilitate demand response programs with diverse loads, including space heating/cooling and electrical equipment. Achieving real-time autonomous demand response in networks of buildings is challenging due to uncertain system parameters, dynamic market prices, and complex coupled operational constraints. To develop a scalable approach for automated demand response in networks of interconnected buildings, coordination between buildings is necessary to ensure demand flexibility and grid stability. We propose a MADRL technique that uses an actor-critic algorithm with a shared attention mechanism to enable effective and scalable real-time coordinated demand response in grid-responsive buildings. The presented case studies demonstrate the ability of the proposed approach to obtain decentralized cooperative policies for electricity cost minimization and efficient load shaping without knowledge of the building energy systems. The viability of the proposed control approach is also demonstrated by a reduction of over 6% in net load demand compared to standard reinforcement learning approaches, deep deterministic policy gradient and soft actor-critic, as well as a tailored MADRL approach for demand response.
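The abstract does not detail the shared attention mechanism of the critic. As a rough illustration only, the sketch below shows a MAAC-style scaled dot-product attention step, in which each building agent's critic attends over the other agents' state-action encodings to produce a per-agent context vector. The function name, the fixed random projection matrices, and the NumPy formulation are illustrative assumptions, not the authors' implementation (in practice the projections would be learned network parameters).

```python
import numpy as np

def shared_attention(encodings, d_k=None):
    """Attention over agent encodings, in the style of attention-based
    multi-agent actor-critic methods. For each agent i, attends over all
    other agents j != i and returns a per-agent context vector that can be
    fed into agent i's Q-value head.

    encodings: (n_agents, d) array of per-agent state-action embeddings.
    """
    n, d = encodings.shape
    d_k = d if d_k is None else d_k

    # Shared (across agents) projection matrices; fixed random values here
    # purely for illustration -- these would be learned in training.
    rng = np.random.default_rng(0)
    W_q = rng.standard_normal((d, d_k)) / np.sqrt(d)
    W_k = rng.standard_normal((d, d_k)) / np.sqrt(d)
    W_v = rng.standard_normal((d, d_k)) / np.sqrt(d)

    Q, K, V = encodings @ W_q, encodings @ W_k, encodings @ W_v
    contexts = np.empty((n, d_k))
    for i in range(n):
        others = [j for j in range(n) if j != i]
        # Scaled dot-product compatibility of agent i's query with the others.
        scores = Q[i] @ K[others].T / np.sqrt(d_k)
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()                 # softmax attention weights
        contexts[i] = weights @ V[others]        # weighted sum of other agents' values
    return contexts
```

Because the projections are shared across agents, the same critic scales to any number of buildings, which is consistent with the scalability claim in the abstract.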
Pages: 14