Dynamically engineered multi-modal feature learning for predictions of office building cooling loads

Cited: 2
Authors
Liu, Yiren [1 ]
Zhao, Xiangyu [1 ]
Qin, S. Joe [2 ,3 ]
Affiliations
[1] City Univ Hong Kong, Sch Data Sci, Hong Kong, Peoples R China
[2] Lingnan Univ, Inst Data Sci, Hong Kong, Peoples R China
[3] Lingnan Univ, Dept Comp & Decis Sci, Hong Kong, Peoples R China
Keywords
Feature engineering; Building energy management; Cooling load prediction; Sparse statistical learning; Automated machine learning; REGRESSION; ENERGY; MODEL
DOI
10.1016/j.apenergy.2023.122183
Chinese Library Classification
TE [Petroleum and Natural Gas Industry]; TK [Energy and Power Engineering]
Subject Classification Codes
0807; 0820
Abstract
This paper reports a new knowledge-driven engineered feature learning approach developed in response to the Global AI Challenge for Building E&M Facilities held by the Electrical and Mechanical Services Department (EMSD) of the Hong Kong SAR. The results were awarded a Grand Prize by the competition organizer. A dynamically engineered multi-modal feature learning (DEMMFL) method is proposed for predicting the cooling load of two office buildings. The DEMMFL model is estimated with Lasso-ridge regression and compared with other well-known methods such as the Lasso. The novel approach applies control-system knowledge to engineer useful features and to explore load patterns for multi-mode modeling. Deep learning methods (LSTM and GRU) and automated machine learning (AutoGluon) are implemented and tested in parallel to compare the proposed model with existing methods. The proposed model is shown to predict long-term cooling load most accurately, using engineered features derived from weather information only.
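The abstract does not spell out the DEMMFL feature set or the Lasso-ridge estimator in detail. As a rough illustration only, the sketch below assumes hourly weather data in a pandas DataFrame with a DatetimeIndex and `temp`/`humidity` columns, uses lagged temperature as the "dynamic" engineered features, a simple weekday office-hours indicator as a stand-in for the multi-mode load pattern, and interprets "Lasso-ridge regression" as Lasso-based feature selection followed by a ridge refit. All column names, lag choices, and schedule assumptions are hypothetical, not the authors' actual pipeline.

```python
# Hypothetical sketch of dynamic weather-feature engineering plus a
# Lasso-select / ridge-refit estimator for cooling load prediction.
# Feature names, lags, and the occupancy schedule are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.linear_model import LassoCV, Ridge
from sklearn.preprocessing import StandardScaler


def engineer_features(weather: pd.DataFrame) -> pd.DataFrame:
    """Build lagged (dynamic) weather features and a crude occupancy-mode indicator."""
    df = pd.DataFrame(index=weather.index)
    df["temp"] = weather["temp"]
    df["humidity"] = weather["humidity"]
    # Dynamic terms: lagged temperature to reflect the building's thermal inertia.
    for lag in (1, 2, 3, 24):
        df[f"temp_lag{lag}h"] = weather["temp"].shift(lag)
    # Simple multi-mode indicator: assumed weekday office hours (08:00-18:00).
    hour = df.index.hour
    weekday = df.index.dayofweek < 5
    df["occupied"] = ((hour >= 8) & (hour <= 18) & weekday).astype(float)
    df["occupied_x_temp"] = df["occupied"] * df["temp"]
    return df.dropna()


def fit_lasso_ridge(X: np.ndarray, y: np.ndarray, ridge_alpha: float = 1.0):
    """Lasso for sparse feature selection, then a ridge refit on the surviving features."""
    scaler = StandardScaler().fit(X)
    Xs = scaler.transform(X)
    lasso = LassoCV(cv=5, random_state=0).fit(Xs, y)
    selected = np.flatnonzero(lasso.coef_)      # indices of features kept by the Lasso
    if selected.size == 0:                      # fall back to all features if none survive
        selected = np.arange(Xs.shape[1])
    ridge = Ridge(alpha=ridge_alpha).fit(Xs[:, selected], y)
    return scaler, selected, ridge


def predict_cooling_load(scaler, selected, ridge, X_new: np.ndarray) -> np.ndarray:
    """Predict cooling load for new rows of engineered features."""
    return ridge.predict(scaler.transform(X_new)[:, selected])
```

A usage sketch, still under the same assumptions, would be `X = engineer_features(weather_df)` followed by `model = fit_lasso_ridge(X.values, y)` with the cooling-load series `y` aligned to `X.index`; the Lasso step enforces sparsity over the engineered features while the ridge refit stabilizes the coefficients that remain.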
Pages: 15