Source-Free Domain Adaptation With Self-Supervised Learning for Nonintrusive Load Monitoring

Cited by: 1
Authors
Zhong, Feichi [1]
Shan, Zihan [1]
Si, Gangquan [1]
Liu, Aoming [2]
Zhao, Gerui [1]
Li, Bo [1]
Affiliations
[1] Xi An Jiao Tong Univ, Res Ctr Informat Fus & Intelligent Control, Sch Elect Engn, Xian 710115, Peoples R China
[2] Boston Univ, Dept Comp Sci, Boston, MA 02215 USA
Keywords
Adaptation models; Transfer learning; Feature extraction; Training; Load monitoring; Data models; Aggregates; Self-supervised learning; Load modeling; Hidden Markov models; Deep learning (DL); nonintrusive load monitoring (NILM); self-supervised learning; source-free domain adaptation (SFDA); NEURAL-NETWORKS; DISAGGREGATION
DOI
10.1109/TIM.2024.3480230
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communications Technology]
Discipline Codes
0808; 0809
Abstract
Nonintrusive load monitoring (NILM) supports energy-consumption planning and time-of-use pricing by disaggregating aggregate measurements into appliance-level electrical information. However, its widespread adoption still faces significant challenges. Variations in the energy-consumption background, such as user habits and appliance brands, create substantial distribution disparities in load data, which severely degrade the performance of trained models when they are applied to new scenarios. Moreover, user-privacy concerns and data-collection costs further impede gathering the load data needed for transfer training. To address these issues, we propose a source-free domain adaptation (SFDA) method for NILM that improves generalization under severely limited data acquisition. We design a self-supervised subnetwork built on a sequence masking-restoration task to learn domain-invariant appliance features without using either the source-domain dataset or target-domain labels. Furthermore, entropy minimization and the representation subspace distance (RSD) are introduced to align the feature spaces of different domains and to mitigate the effect of feature scaling on model performance. Cross-house and cross-dataset adaptation experiments are conducted on four publicly available datasets. The proposed method achieves average improvements of 6.6% in MAE and 7.1% in F1-score over the baseline and compares favorably with state-of-the-art models that use additional training data, demonstrating its potential to improve generalization under data restrictions.
Pages: 13
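The abstract names three mechanisms: a masking-restoration pretext task on the aggregate sequence, entropy minimization on target predictions, and RSD alignment between feature sets. The PyTorch sketch below shows one way these pieces could fit together. The toy architecture, the choice to align features of masked and unmasked views with RSD (the abstract does not say which two feature sets are compared), and every hyperparameter are illustrative assumptions rather than the authors' implementation; the RSD term is a simplified rendering of the formulation of Chen et al. (ICML 2021).

```python
# Hedged sketch of the three loss terms named in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Seq2PointNILM(nn.Module):
    """Toy 1-D conv encoder with a restoration head and an appliance head."""
    def __init__(self, win=480, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, hidden, 9, padding=4), nn.ReLU(),
            nn.Conv1d(hidden, hidden, 9, padding=4), nn.ReLU(),
        )
        self.restore_head = nn.Conv1d(hidden, 1, 1)   # self-supervised head
        self.power_head = nn.Linear(hidden * win, 1)  # appliance power head

    def forward(self, x):                 # x: (batch, 1, win)
        h = self.encoder(x)
        restored = self.restore_head(h)   # reconstruction of the input
        feat = h.flatten(1)               # flattened features for RSD
        return restored, feat, self.power_head(feat)

def random_patch_mask(x, mask_ratio=0.25, patch=16):
    """Zero out a random fraction of fixed-size patches (assumes win % patch == 0)."""
    b, _, t = x.shape
    keep = torch.rand(b, t // patch, device=x.device) > mask_ratio
    mask = keep.repeat_interleave(patch, dim=1).unsqueeze(1).float()
    return x * mask, 1.0 - mask           # masked input, masked-span indicator

def entropy_loss(power, on_threshold=10.0, temp=10.0):
    """Entropy minimization: push soft on/off predictions toward confidence."""
    p_on = torch.sigmoid((power - on_threshold) / temp)
    ent = -(p_on * torch.log(p_on + 1e-8)
            + (1 - p_on) * torch.log(1 - p_on + 1e-8))
    return ent.mean()

def rsd_loss(feat_a, feat_b, bmp_weight=0.1):
    """Simplified representation subspace distance between two feature batches."""
    u_a, _, _ = torch.linalg.svd(feat_a, full_matrices=False)
    u_b, _, _ = torch.linalg.svd(feat_b, full_matrices=False)
    p, cos_t, qh = torch.linalg.svd(u_a.T @ u_b, full_matrices=False)
    sin_t = torch.sqrt(torch.clamp(1.0 - cos_t ** 2, min=0.0))  # principal angles
    bmp = torch.norm(u_a @ p - u_b @ qh.T)   # base-mismatch penalty
    return sin_t.sum() + bmp_weight * bmp

# One adaptation step on an unlabeled target batch (loss weights are illustrative).
model = Seq2PointNILM()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
x = torch.rand(32, 1, 480)                        # unlabeled aggregate windows
x_masked, missing = random_patch_mask(x)
restored, feat_masked, _ = model(x_masked)
_, feat_full, power = model(x)
loss = (F.mse_loss(restored * missing, x * missing)   # restore only masked spans
        + 0.1 * entropy_loss(power)
        + 0.1 * rsd_loss(feat_masked, feat_full))
opt.zero_grad()
loss.backward()
opt.step()
```

In a full SFDA pipeline, the encoder would start from the source-trained weights and be adapted only on unlabeled target data, matching the abstract's constraint that neither source-domain data nor target-domain labels are available during adaptation.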