Active Reinforcement Learning for Robust Building Control

Cited by: 0
Authors
Jang, Doseok [1 ]
Yan, Larry [1 ]
Spangher, Lucas [1 ]
Spanos, Costas J. [1 ]
Affiliations
[1] Univ Calif Berkeley, Berkeley, CA 94720 USA
Source
THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 20 | 2024
Funding
National Research Foundation, Singapore;
Keywords
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Reinforcement learning (RL) is a powerful tool for optimal control that has found great success in Atari games, the game of Go, robotic control, and building optimization. RL is also very brittle; agents often overfit to their training environment and fail to generalize to new settings. Unsupervised environment design (UED) has been proposed as a solution to this problem, in which the agent trains in environments that have been specially selected to help it learn. Previous UED algorithms focus on training an RL agent that generalizes across a large distribution of environments. This is not necessarily desirable when we wish to prioritize performance in one environment over others. In this work, we examine the setting of robust RL building control, where we wish to train an RL agent that prioritizes performing well in normal weather while still being robust to extreme weather conditions. We demonstrate a novel UED algorithm, ActivePLR, that uses uncertainty-aware neural network architectures to generate new training environments at the limit of the RL agent's ability while being able to prioritize performance in a desired base environment. We show that ActivePLR is able to outperform state-of-the-art UED algorithms in minimizing energy usage while maximizing occupant comfort in the setting of building control.
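The abstract only sketches the idea of uncertainty-aware environment generation; the sketch below is an illustration of that general idea, not the authors' implementation. It assumes an MC-dropout value network whose prediction variance over environment parameters (e.g., weather offsets) serves as an uncertainty signal, and uses gradient ascent to propose environments near the agent's frontier while an L2 penalty keeps proposals close to the base (normal-weather) environment. All names and hyperparameters are assumptions for illustration.

```python
# Illustrative sketch (not the paper's code): uncertainty-guided environment
# search in the spirit of ActivePLR, under the assumptions stated above.
import torch
import torch.nn as nn


class DropoutValueNet(nn.Module):
    """Predicts the agent's expected return from environment parameters."""

    def __init__(self, env_dim: int, hidden: int = 64, p_drop: float = 0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(env_dim, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, 1),
        )

    def forward(self, env_params: torch.Tensor) -> torch.Tensor:
        return self.net(env_params)


def propose_environment(value_net: DropoutValueNet,
                        base_env: torch.Tensor,
                        n_mc: int = 20,
                        steps: int = 50,
                        lr: float = 0.05,
                        proximity_weight: float = 1.0) -> torch.Tensor:
    """Gradient ascent on env parameters toward high epistemic uncertainty."""
    env = base_env.clone().requires_grad_(True)
    opt = torch.optim.Adam([env], lr=lr)
    value_net.train()  # keep dropout active for Monte Carlo sampling
    for _ in range(steps):
        # Std. dev. of MC-dropout value predictions ~ epistemic uncertainty
        preds = torch.stack([value_net(env) for _ in range(n_mc)])
        uncertainty = preds.std()
        # Penalize drift from the base environment so training still
        # prioritizes normal-weather performance
        proximity = proximity_weight * (env - base_env).pow(2).sum()
        loss = -(uncertainty - proximity)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return env.detach()


if __name__ == "__main__":
    base = torch.zeros(4)            # e.g., four weather parameters at nominal values
    net = DropoutValueNet(env_dim=4)
    new_env = propose_environment(net, base)
    print("proposed environment parameters:", new_env)
```

The proximity weight controls the trade-off the abstract describes: a large value keeps training anchored to the base environment, while a small value lets the proposer wander toward more extreme (higher-uncertainty) conditions.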
Pages: 22150-22158
Number of pages: 9
Related Papers
50 records in total
  • [1] Robust reinforcement learning control
    Kretchmar, RM
    Young, PM
    Anderson, CW
    Hittle, DC
    Anderson, ML
    Tu, J
    Delnero, CC
    PROCEEDINGS OF THE 2001 AMERICAN CONTROL CONFERENCE, VOLS 1-6, 2001, : 902 - 907
  • [2] Evaluation of reinforcement learning for optimal control of building active and passive thermal storage inventory
    Liu, Simeng
    Henze, Gregor P.
    JOURNAL OF SOLAR ENERGY ENGINEERING-TRANSACTIONS OF THE ASME, 2007, 129 (02): : 215 - 225
  • [3] Evaluation of reinforcement learning for optimal control of building active and passive thermal storage inventory
    Liu, Simeng
    Henze, Gregor P.
    SOLAR ENGINEERING 2005, 2006, : 301 - 311
  • [4] Robust Deep Reinforcement Learning for Quadcopter Control
    Deshpande, Aditya M.
    Minai, Ali A.
    Kumar, Manish
    IFAC PAPERSONLINE, 2021, 54 (20): : 90 - 95
  • [5] The driver and the engineer: Reinforcement learning and robust control
    Bernat, Natalie
    Chen, Jiexin
    Matni, Nikolai
    Doyle, John
    2020 AMERICAN CONTROL CONFERENCE (ACC), 2020, : 3932 - 3939
  • [6] Deep Reinforcement Learning for Building HVAC Control
    Wei, Tianshu
    Wang, Yanzhi
    Zhu, Qi
    PROCEEDINGS OF THE 2017 54TH ACM/EDAC/IEEE DESIGN AUTOMATION CONFERENCE (DAC), 2017,
  • [7] Reinforcement Learning for Control of Building HVAC Systems
    Raman, Naren Srivaths
    Devraj, Adithya M.
    Barooah, Prabir
    Meyn, Sean P.
    2020 AMERICAN CONTROL CONFERENCE (ACC), 2020, : 2326 - 2332
  • [8] Risk-sensitive Reinforcement Learning and Robust Learning for Control
    Noorani, Erfaun
    Baras, John S.
    2021 60TH IEEE CONFERENCE ON DECISION AND CONTROL (CDC), 2021, : 2976 - 2981
  • [9] Reinforcement learning building control approach harnessing imitation learning
    Dey, Sourav
    Marzullo, Thibault
    Zhang, Xiangyu
    Henze, Gregor
    ENERGY AND AI, 2023, 14
  • [10] Application of reinforcement learning for active noise control
    Hoseini Sabzevari, Seyed Amir
    Moavenian, Majid
    TURKISH JOURNAL OF ELECTRICAL ENGINEERING AND COMPUTER SCIENCES, 2017, 25 (04) : 2606 - 2613