Energy-efficient personalized thermal comfort control in office buildings based on multi-agent deep reinforcement learning

Cited by: 41
Authors
Yu, Liang [1 ,2 ]
Xu, Zhanbo [1 ]
Zhang, Tengfei [2 ]
Guan, Xiaohong [1 ]
Yue, Dong [2 ]
Affiliations
[1] Xi An Jiao Tong Univ, Fac Elect & Informat Engn, Xian 710049, Peoples R China
[2] Nanjing Univ Posts & Telecommun, Coll Automat, Coll Artificial Intelligence, Nanjing 210003, Peoples R China
Funding
National Natural Science Foundation of China; China Postdoctoral Science Foundation;
Keywords
Office buildings; HVAC systems; Personal comfort systems; Personalized thermal comfort control; Energy-efficient; Multi-agent deep reinforcement learning; CONTROL SCHEME; FRAMEWORK; SYSTEMS; MODEL; FANS;
DOI
10.1016/j.buildenv.2022.109458
CLC classification number
TU [Building Science];
Subject classification code
0813;
Abstract
In a shared office space, the percentage of occupants whose thermal comfort is satisfied is typically low. The main reason is that heating, ventilation, and air conditioning (HVAC) systems cannot provide an individual thermal environment for each occupant within the shared office space. Although personal comfort systems (PCSs) can be adopted to implement heterogeneous thermal environments, they have limited adjustment abilities. Consequently, coordinating the operations of PCSs and an HVAC system is a good choice. In this paper, the coordinated control problem of PCSs and an HVAC system in a shared office space is investigated with the goal of minimizing the total energy consumption while maintaining a comfortable individual thermal environment for each occupant. Specifically, we first formulate an expected energy consumption minimization problem involving PCSs and an HVAC system. Because the building thermal dynamics model is inexplicit and several parameters are uncertain, the problem is challenging to solve. To overcome this challenge, we reformulate the problem as a Markov game with heterogeneous agents. To promote efficient cooperation among these agents, we propose a real-time control algorithm based on attention-based multi-agent deep reinforcement learning, which requires neither an explicit building thermal dynamics model nor any prior knowledge of the uncertain parameters. Simulation results based on real-world traces show that, compared with baselines, the proposed algorithm can simultaneously reduce energy consumption by 0.7%-4.18% and reduce the average thermal comfort deviation by 64.13%-72.08%.
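The core idea of an attention-based multi-agent critic, as used in this family of methods, is that each agent's critic forms a query from its own observation and attends over the other agents' encoded observations, so a heterogeneous HVAC agent can weight the PCS agents' information unequally. The sketch below illustrates only this attention step with plain NumPy; all names, dimensions, and the random weights are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_critic_features(obs_embeds, agent_idx, W_q, W_k, W_v):
    """Scaled dot-product attention: agent `agent_idx` (e.g. the HVAC agent)
    attends over the embeddings of the other agents (e.g. the PCS agents)
    to build a state feature for its centralized critic."""
    q = W_q @ obs_embeds[agent_idx]                  # query from this agent
    others = [j for j in range(len(obs_embeds)) if j != agent_idx]
    keys = np.stack([W_k @ obs_embeds[j] for j in others])
    vals = np.stack([W_v @ obs_embeds[j] for j in others])
    scores = keys @ q / np.sqrt(len(q))              # similarity scores
    alpha = softmax(scores)                          # attention weights, sum to 1
    return alpha @ vals, alpha                       # weighted mix of others' info

# Toy setup: one HVAC agent (index 0) plus two PCS agents, 4-dim embeddings.
rng = np.random.default_rng(0)
d = 4
obs_embeds = [rng.normal(size=d) for _ in range(3)]
W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))
feat, alpha = attention_critic_features(obs_embeds, 0, W_q, W_k, W_v)
print(alpha)  # two weights summing to 1, up to floating-point rounding
```

In the full algorithm, `feat` would be concatenated with the agent's own embedding and fed through the critic's value head; here it merely shows how the attention weights let one agent emphasize whichever peers are most relevant to its value estimate.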
Pages: 12