Enhancing Energy Efficiency and Flexibility in Educational Buildings Through a Deep Reinforcement Learning-Based Controller for Rooftop Units

Cited: 0
Authors
Brandi, Silvio [1 ]
Pizza, Andrea [1 ]
Buscemi, Giacomo [1 ]
Razzano, Giuseppe [1 ]
Capozzoli, Alfonso [1 ]
Affiliation
[1] Politecn Torino, Dept Energy Galileo Ferraris, TEBE Res Grp, BAEDA Lab, Duca Abruzzi 24, I-10129 Turin, Italy
Source
MULTIPHYSICS AND MULTISCALE BUILDING PHYSICS, IBPC 2024, VOL 3 | 2025, Vol. 554
Keywords
Energy Efficiency; HVAC Systems; Deep Reinforcement Learning Control; Energy Flexibility; Rooftop Units;
DOI
10.1007/978-981-97-8313-7_8
Chinese Library Classification
TU [Building Science];
Discipline Classification Code
0813 ;
Abstract
Advanced controllers based on predictive and adaptive frameworks play a pivotal role in the optimisation of building energy management and in the exploitation of building energy flexibility. This study analyses the application of a deep reinforcement learning controller tailored for managing four RoofTop Units (RTUs) serving four classrooms in an educational building at the Politecnico di Torino Campus, equipped with solar PV panels and a battery storage system. The controller was designed to reduce energy costs while maintaining desirable indoor temperature conditions. The integration of renewable energy sources and storage provides a holistic approach to the management of integrated energy systems. The control agent was initially trained on a single RTU serving a classroom and subsequently transferred and deployed on the remaining systems. After the transfer process, the proposed controllers effectively manage each RTU and their interactions, leveraging energy generation and storage to reduce reliance on the grid. To assess the performance of the developed controller, a simulation environment combining Modelica and Python was employed. The simulation results highlight the proposed controller's ability to be effectively transferred among similar systems while achieving improvements in energy management. Compared to a traditional control strategy, the proposed solution effectively reduces operational costs while maintaining comfort standards within the building environment. The results obtained demonstrate the potential of deep reinforcement learning strategies in enhancing building energy management and underscore their effectiveness in increasing the flexibility of integrated energy systems in buildings.
Pages: 51-57 (7 pages)