Source-Free Domain Adaptation With Self-Supervised Learning for Nonintrusive Load Monitoring

Cited by: 1
Authors
Zhong, Feichi [1 ]
Shan, Zihan [1 ]
Si, Gangquan [1 ]
Liu, Aoming [2 ]
Zhao, Gerui [1 ]
Li, Bo [1 ]
Affiliations
[1] Xi An Jiao Tong Univ, Res Ctr Informat Fus & Intelligent Control, Sch Elect Engn, Xian 710115, Peoples R China
[2] Boston Univ, Dept Comp Sci, Boston, MA 02215 USA
Keywords
Adaptation models; Transfer learning; Feature extraction; Training; Load monitoring; Data models; Aggregates; Self-supervised learning; Load modeling; Hidden Markov models; Deep learning (DL); nonintrusive load monitoring (NILM); self-supervised learning; source-free domain adaptation (SFDA); NEURAL-NETWORKS; DISAGGREGATION;
DOI
10.1109/TIM.2024.3480230
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Discipline Codes
0808; 0809;
Abstract
Nonintrusive load monitoring (NILM) benefits energy-consumption planning and time-of-use pricing by disaggregating appliance-level electrical information. However, its widespread adoption and rapid application face significant restrictions and challenges. Variations in energy consumption backgrounds, such as user habits and appliance brands, result in substantial distribution disparities in load data, which significantly deteriorate the performance of trained models when applied to new scenarios. Moreover, concerns regarding user privacy and costs further impede the collection of load data when transfer training for adaptability is necessary. To address these issues, we propose a source-free domain adaptation (SFDA) method for NILM to enhance generalization performance under conditions of severely limited data acquisition. We design a self-supervised subnetwork based on a sequence masking-restoration task to learn domain-invariant features of appliances without using the source-domain dataset or target-domain labels. Furthermore, entropy minimization and the representation subspace distance (RSD) are introduced to align the feature spaces of different domains and mitigate the effect of feature scaling on model performance. Cross-house and cross-dataset adaptation experiments are conducted on four publicly available datasets. The proposed method achieves an average improvement of 6.6% in MAE and 7.1% in F1-score over the baseline and performs competitively against state-of-the-art models that use additional training data, demonstrating the method's potential to enhance generalization under data restrictions.
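The sequence masking-restoration pretext task described in the abstract can be sketched in simplified form as follows. The function names, the masking ratio, and the linear-interpolation "restorer" are illustrative stand-ins; the paper's actual subnetwork is a learned model, and this sketch only shows the mask-then-restore structure of the self-supervised objective:

```python
import numpy as np

def mask_sequence(seq, mask_ratio=0.25, mask_value=0.0, rng=None):
    """Randomly mask a fraction of time steps in a load sequence.

    Returns the masked sequence and a boolean mask (True = masked),
    mimicking the input of a masking-restoration pretext task.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    mask = rng.random(seq.shape) < mask_ratio
    masked = seq.copy()
    masked[mask] = mask_value
    return masked, mask

def restoration_loss(restored, original, mask):
    """MSE computed only on the masked positions (the pretext target)."""
    return float(np.mean((restored[mask] - original[mask]) ** 2))

# Toy aggregate-power window (watts); a real pipeline would use mains readings.
idx = np.arange(100)
seq = 600.0 + 500.0 * np.sin(np.linspace(0.0, 2.0 * np.pi, 100))

masked, mask = mask_sequence(seq, mask_ratio=0.3)

# Stand-in "restorer": linear interpolation over the masked gaps.
# In the paper this role is played by the trained self-supervised subnetwork.
restored = masked.copy()
restored[mask] = np.interp(idx[mask], idx[~mask], seq[~mask])

loss = restoration_loss(restored, seq, mask)
```

Minimizing such a restoration loss on unlabeled target-domain sequences is what lets the subnetwork adapt without source data or target labels.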
Pages: 13
Related Papers
50 records in total
  • [1] Robust self-supervised learning for source-free domain adaptation
    Liang Tian
    Lihua Zhou
    Hao Zhang
    Zhenbin Wang
    Mao Ye
    Signal, Image and Video Processing, 2023, 17 : 2405 - 2413
  • [3] Self-Supervised Noisy Label Learning for Source-Free Unsupervised Domain Adaptation
    Chen, Weijie
    Lin, Luojun
    Yang, Shicai
    Xie, Di
    Pu, Shiliang
    Zhuang, Yueting
    2022 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2022, : 10185 - 10192
  • [4] Nonintrusive Load Monitoring Based on Self-Supervised Learning
    Chen, Shuyi
    Zhao, Bochao
    Zhong, Mingjun
    Luan, Wenpeng
    Yu, Yixin
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2023, 72
  • [5] Source-Free Domain Adaptation with Contrastive Domain Alignment and Self-supervised Exploration for Face Anti-spoofing
    Liu, Yuchen
    Chen, Yabo
    Dai, Wenrui
    Gou, Mengran
    Huang, Chun-Ting
    Xiong, Hongkai
    COMPUTER VISION, ECCV 2022, PT XII, 2022, 13672 : 511 - 528
  • [6] SS-SFDA : Self-Supervised Source-Free Domain Adaptation for Road Segmentation in Hazardous Environments
    Kothandaraman, Divya
    Chandra, Rohan
    Manocha, Dinesh
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW 2021), 2021, : 3042 - 3052
  • [7] A contrastive self-supervised learning method for source-free EEG emotion recognition
    Wang, Yingdong
    Ruan, Qunsheng
    Wu, Qingfeng
    Wang, Shuocheng
    USER MODELING AND USER-ADAPTED INTERACTION, 2025, 35 (01)
  • [8] Self-Supervised Learning for Domain Adaptation on Point Clouds
    Achituve, Idan
    Maron, Haggai
    Chechik, Gal
    2021 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2021), 2021, : 123 - 133
  • [9] Distill-SODA: Distilling Self-Supervised Vision Transformer for Source-Free Open-Set Domain Adaptation in Computational Pathology
    Vray, Guillaume
    Tomar, Devavrat
    Bozorgtabar, Behzad
    Thiran, Jean-Philippe
    IEEE TRANSACTIONS ON MEDICAL IMAGING, 2024, 43 (05) : 2021 - 2032
  • [10] Self-training transformer for source-free domain adaptation
    Yang, Guanglei
    Zhong, Zhun
    Ding, Mingli
    Sebe, Nicu
    Ricci, Elisa
    APPLIED INTELLIGENCE, 2023, 53 (13) : 16560 - 16574