SAED: self-attentive energy disaggregation

Cited by: 0
Authors
Nikolaos Virtsionis-Gkalinikis
Christoforos Nalmpantis
Dimitris Vrakas
Affiliations
[1] Aristotle University of Thessaloniki,School of Informatics
Source
Machine Learning | 2023, Volume 112
Keywords
Energy disaggregation; Non-intrusive load monitoring; Artificial neural networks; Self attention;
DOI
Not available
Abstract
The field of energy disaggregation deals with approximating appliance-level electricity consumption using only the aggregate measurement of the mains meter. Recent research has used deep neural networks and outperformed previous methods based on Hidden Markov Models. On the other hand, deep learning models are computationally heavy and require huge amounts of data. The main objective of the current paper is to incorporate the attention mechanism into neural networks in order to reduce their computational complexity. Two versions of the attention mechanism are utilized, namely Additive and Dot Attention. The experiments show that they perform on par, while the Dot mechanism is slightly faster. The two versions of self-attentive neural networks are compared against two state-of-the-art energy disaggregation deep learning models. The experimental results show that the proposed architecture achieves faster or equal training and inference times, with only a minor performance drop depending on the device and the dataset.
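The abstract contrasts two attention-scoring schemes, Additive and Dot. The sketch below is a minimal illustration of how those two variants typically differ inside a self-attention layer applied to a window of aggregate power readings; it is not the paper's implementation, and the PyTorch framing, layer sizes, and the scaled dot-product form are assumptions made for the example.

```python
# Minimal sketch (not the authors' code) of the two self-attention variants
# named in the abstract. Dimensions and scaling are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DotSelfAttention(nn.Module):
    """Scaled dot-product self-attention over a window of aggregate readings."""

    def __init__(self, dim: int):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.value = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim)
        q, k, v = self.query(x), self.key(x), self.value(x)
        scores = torch.bmm(q, k.transpose(1, 2)) * self.scale   # (batch, seq, seq)
        weights = F.softmax(scores, dim=-1)
        return torch.bmm(weights, v)                            # (batch, seq, dim)


class AdditiveSelfAttention(nn.Module):
    """Additive (Bahdanau-style) self-attention: scores come from a small MLP."""

    def __init__(self, dim: int):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.value = nn.Linear(dim, dim)
        self.energy = nn.Linear(dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        q, k, v = self.query(x), self.key(x), self.value(x)
        # Broadcast every query/key pair and score it with a learned vector.
        scores = self.energy(torch.tanh(q.unsqueeze(2) + k.unsqueeze(1))).squeeze(-1)
        weights = F.softmax(scores, dim=-1)                     # (batch, seq, seq)
        return torch.bmm(weights, v)


if __name__ == "__main__":
    window = torch.randn(8, 100, 32)  # 8 aggregate windows, 100 time steps, 32 features
    print(DotSelfAttention(32)(window).shape)       # torch.Size([8, 100, 32])
    print(AdditiveSelfAttention(32)(window).shape)  # torch.Size([8, 100, 32])
```

One way to read the abstract's speed observation: the dot variant reduces scoring to a single batched matrix product, whereas the additive variant materializes a (batch, seq, seq, dim) tensor before scoring, which is consistent with the Dot mechanism being slightly faster at the same accuracy.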
Pages: 4081-4100
Number of pages: 19