An Efficient Sequential Decentralized Federated Progressive Channel Pruning Strategy for Smart Grid Electricity Theft Detection
Times Cited: 0
Authors:
Guo, Fanghong [1]
Li, Shengwei [1]
Yang, Hao [1]
Dong, Chen [1]
Chen, Yifang [2]
Li, Guoqi [3]
Affiliations:
[1] Zhejiang Univ Technol, Coll Informat Engn, Hangzhou 310023, Peoples R China
[2] Hangzhou Power Supply Co, State Grid Zhejiang Elect Power Co Ltd, Hangzhou 310016, Peoples R China
[3] Chinese Acad Sci, Inst Automat, Beijing 100190, Peoples R China
Funding:
National Natural Science Foundation of China;
Keywords:
Data models;
Computational modeling;
Training;
Servers;
Biological system modeling;
Data privacy;
Electricity;
Adaptation models;
Costs;
Distributed databases;
Decentralized federated learning (FL);
electricity theft detection (ETD);
model pruning;
privacy protection;
smart grid;
ALGORITHM;
SECURITY;
DOI:
10.1109/TII.2024.3507183
CLC Number:
TP [automation technology, computer technology];
Subject Classification Code:
0812;
Abstract:
This article develops a lightweight, decentralized federated learning (FL)-based strategy for electricity theft detection (ETD). Unlike most existing ETD solutions, which typically deploy centralized deep learning models, the proposed method uses well-pruned lightweight networks and operates in a completely decentralized manner while maintaining the performance of the ETD model. Specifically, to protect data privacy, a novel sequential decentralized FL (SDFL) framework is designed that eliminates the centralized parameter-aggregation node of traditional FL: each client exchanges model parameters only with its neighbors and trains its model locally. In addition, to ease deployment on edge devices, model pruning is integrated with the sequential transmission pattern of the SDFL framework. A progressive channel pruning technique is proposed that gradually reduces the number of model channels during training, compressing the model and simplifying field deployment. Experiments demonstrate that the strategy compresses the model's floating-point operations from 18.32M to 3.60M and reduces the number of parameters from 8.61M to 3.47M, while protecting user privacy and maintaining good detection performance. Deployment on an edge device, a Raspberry Pi, shows that the strategy reduces model inference time from 329.35 s to 141.50 s, improving detection efficiency by 57.04%.
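The SDFL training pattern described in the abstract (no central aggregator; the model travels between neighboring clients, each training locally before handing parameters on) can be illustrated with a minimal sketch. The abstract does not specify the topology, local optimizer, or round count, so the ring arrangement, `Client` class, `local_train` stub, and `NUM_ROUNDS` below are illustrative assumptions, not the authors' implementation:

```python
# Minimal sketch of sequential decentralized FL (SDFL): no central server;
# the model is passed along a ring of clients, each training locally and
# forwarding the parameters to its neighbor. All names here are assumed
# for illustration only.
import copy

NUM_ROUNDS = 5  # assumed number of passes around the ring

class Client:
    def __init__(self, data):
        self.data = data  # this client's local consumption records

    def local_train(self, params):
        # Placeholder for local training on this client's private data;
        # a real implementation would run SGD here. We return a copy so
        # no client mutates another client's parameters.
        return copy.deepcopy(params)

def sdfl_train(clients, init_params):
    """Pass the model sequentially around the ring of clients."""
    params = init_params
    for _ in range(NUM_ROUNDS):
        for client in clients:  # each client only talks to its successor
            params = client.local_train(params)
    return params

if __name__ == "__main__":
    clients = [Client(data=None) for _ in range(4)]
    final_params = sdfl_train(clients, init_params={"w": 0.0})
    print(final_params)
```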
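The progressive channel pruning step can be sketched in the same spirit. The abstract gives neither the channel-importance criterion nor the schedule, so the L1-norm ranking and the 90%/80%/70% keep-ratio schedule below are illustrative assumptions (PyTorch):

```python
# Hedged sketch of progressive channel pruning: after each round a fraction
# of the least-important output channels (ranked here by L1 norm of their
# filters, an assumed criterion) is removed from a Conv2d layer, so the
# model shrinks gradually over training rounds.
import torch
import torch.nn as nn

def prune_conv_channels(conv: nn.Conv2d, keep_ratio: float) -> nn.Conv2d:
    """Return a new Conv2d keeping only the highest-L1-norm output channels."""
    n_keep = max(1, int(conv.out_channels * keep_ratio))
    # Importance of each output channel = L1 norm of its filter weights.
    importance = conv.weight.detach().abs().sum(dim=(1, 2, 3))
    keep_idx = torch.topk(importance, n_keep).indices.sort().values

    pruned = nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                       stride=conv.stride, padding=conv.padding,
                       bias=conv.bias is not None)
    with torch.no_grad():
        pruned.weight.copy_(conv.weight[keep_idx])
        if conv.bias is not None:
            pruned.bias.copy_(conv.bias[keep_idx])
    return pruned

if __name__ == "__main__":
    conv = nn.Conv2d(16, 64, kernel_size=3, padding=1)
    # Assumed progressive schedule: keep 90%, 80%, 70% of channels per round.
    for keep in (0.9, 0.8, 0.7):
        conv = prune_conv_channels(conv, keep)
        print(conv.out_channels)  # 57 -> 45 -> 31
```

In a full pipeline, pruning after each sequential pass would also require adjusting the input channels of downstream layers; this single-layer sketch omits that bookkeeping for brevity.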
Pages: 2393-2402
Number of Pages: 10