A Trojan Attack Against Smart Grid Federated Learning and Countermeasures

Cited by: 0
Authors
Bondok, Atef H. [1 ]
Badr, Mahmoud M. [2 ,3 ]
Mahmoud, Mohamed M. E. A. [4 ]
El-Toukhy, Ahmed T. [5 ,6 ]
Alsabaan, Maazen [7 ]
Amsaad, Fathi [8 ]
Ibrahem, Mohamed I. [3 ,9 ]
Affiliations
[1] Eastern Connecticut State Univ, Dept Comp Sci, Willimantic, CT 06226 USA
[2] SUNY Polytech Inst, Coll Engn, Dept Network & Comp Secur, Utica, NY 13502 USA
[3] Benha Univ, Fac Engn Shoubra, Dept Elect Engn, Cairo 11672, Egypt
[4] Tennessee Technol Univ, Dept Elect & Comp Engn, Cookeville, TN 38505 USA
[5] Univ South Carolina Aiken, Dept Comp Sci & Engn, Aiken, SC 29801 USA
[6] Al Azhar Univ, Fac Engn, Dept Elect Engn, Cairo 11884, Egypt
[7] King Saud Univ, Coll Comp & Informat Sci, Dept Comp Engn, Riyadh 11451, Saudi Arabia
[8] Wright State Univ, Dept Comp Sci & Engn, Dayton, OH 45435 USA
[9] Augusta Univ, Sch Comp & Cyber Sci, Augusta, GA 30912 USA
Source
IEEE ACCESS | 2024 / Vol. 12
Keywords
Electricity; Trojan horses; Training; Data models; Servers; Detectors; Smart grids; Privacy; Federated learning; Load modeling; security; smart power grid; Trojan attacks; ELECTRICITY THEFT DETECTION; EFFICIENT; SCHEME; SECURE;
DOI
10.1109/ACCESS.2024.3515099
CLC Number (Chinese Library Classification)
TP [Automation Technology; Computer Technology]
Discipline Classification Code
0812
Abstract
In the smart power grid, consumers can hack their smart meters to report low electricity consumption readings and reduce their bills, launching electricity theft cyberattacks. This study investigates a Trojan attack on the federated learning of an electricity theft detector. In this attack, dishonest consumers train the detector on false data so they can later bypass detection, without degrading the detector's overall performance. We propose three defense strategies: Redundancy, Med-Selection, and Combined-Selection. In the Redundancy approach, redundant consumers with similar consumption patterns are included in the federated learning process so that their correct data offsets the attackers' false data when the local models are aggregated. Med-Selection selects the median model parameters among consumers with similar usage patterns to reduce the influence of outliers. In Combined-Selection, we compare the gradients of consumers with the same consumption patterns to the median of all local models, leveraging the fact that honest consumers' gradients lie closer to the median while malicious ones deviate from it. Our experiments on real-world data show that the Trojan attack's success rate can reach 90%. However, our defense methods reduce the attack success rate to about 7%, 4%, and 3.3% for Redundancy, Med-Selection, and Combined-Selection, respectively, when 10% of consumers are malicious.
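The median-based defenses described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; the function names, the flattened-parameter representation, and the `keep_fraction` parameter are illustrative assumptions. It shows the core idea: a coordinate-wise median suppresses a minority of poisoned updates (as in Med-Selection), and distance-to-median filtering keeps the honest updates that cluster around it (as in Combined-Selection).

```python
import numpy as np

def median_aggregate(local_updates):
    """Coordinate-wise median of client model updates (flattened vectors).

    A minority of Trojan-poisoned updates cannot drag the result far,
    because the median ignores extreme values in each coordinate.
    """
    stacked = np.stack(local_updates)  # shape: (n_clients, n_params)
    return np.median(stacked, axis=0)

def filter_by_median_distance(local_updates, keep_fraction=0.9):
    """Average only the updates closest (L2) to the coordinate-wise median.

    Mimics the observation that honest clients' updates lie near the
    median while malicious ones deviate from it.
    """
    med = median_aggregate(local_updates)
    dists = [np.linalg.norm(u - med) for u in local_updates]
    keep = max(1, int(len(local_updates) * keep_fraction))
    idx = np.argsort(dists)[:keep]  # indices of the closest updates
    return np.mean([local_updates[i] for i in idx], axis=0)

# 9 honest clients near [1, 1]; 1 malicious client submits a poisoned update.
honest = [np.array([1.0, 1.0]) + 0.01 * i for i in range(9)]
malicious = [np.array([10.0, -10.0])]
agg = filter_by_median_distance(honest + malicious)
```

With plain averaging the poisoned update would shift the model by roughly one unit per coordinate; here the malicious update is the farthest from the median, so it is discarded and the aggregate stays near the honest cluster.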
Pages: 191828-191846
Page count: 19
Related Papers
50 records
  • [1] Securing One-Class Federated Learning Classifiers Against Trojan Attacks in Smart Grid
    Bondok, Atef H.
    Badr, Mahmoud
    Mahmoud, Mohamed
    Alsabaan, Maazen
    Fouda, Mostafa M.
    Abdullah, Mohamed
    IEEE INTERNET OF THINGS JOURNAL, 2025, 12 (04): : 4006 - 4021
  • [2] Repetitive Backdoor Attacks and Countermeasures for Smart Grid Reinforcement Incremental Learning
    Eltoukhy, Ahmed T.
    Badr, Mahmoud M.
    Elgarhy, Islam
    Mahmoud, Mohamed
    Alsabaan, Maazen
    Alshawi, Tariq
    IEEE INTERNET OF THINGS JOURNAL, 2025, 12 (03): : 3089 - 3104
  • [3] A Distillation-based Attack Against Adversarial Training Defense for Smart Grid Federated Learning
    Bondok, Atef H.
    Mahmoud, Mohamed
    Badr, Mahmoud M.
    Fouda, Mostafa M.
    Alsabaan, Maazen
    2024 IEEE 21ST CONSUMER COMMUNICATIONS & NETWORKING CONFERENCE, CCNC, 2024, : 963 - 968
  • [4] Decaf: Data Distribution Decompose Attack Against Federated Learning
    Dai, Zhiyang
    Gao, Yansong
    Zhou, Chunyi
    Fu, Anmin
    Zhang, Zhi
    Xue, Minhui
    Zheng, Yifeng
    Zhang, Yuqing
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2025, 20 : 405 - 420
  • [5] Practical Attribute Reconstruction Attack Against Federated Learning
    Chen, Chen
    Lyu, Lingjuan
    Yu, Han
    Chen, Gang
    IEEE TRANSACTIONS ON BIG DATA, 2024, 10 (06) : 851 - 863
  • [6] A Privacy-Preserving Federated Learning Scheme Against Poisoning Attacks in Smart Grid
    Li, Xiumin
    Wen, Mi
    He, Siying
    Lu, Rongxing
    Wang, Liangliang
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (09): : 16805 - 16816
  • [7] Analyzing User-Level Privacy Attack Against Federated Learning
    Song, Mengkai
    Wang, Zhibo
    Zhang, Zhifei
    Song, Yang
    Wang, Qian
    Ren, Ju
    Qi, Hairong
    IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, 2020, 38 (10) : 2430 - 2444
  • [8] CapsuleBD: A Backdoor Attack Method Against Federated Learning Under Heterogeneous Models
    Liao, Yuying
    Zhao, Xuechen
    Zhou, Bin
    Huang, Yanyi
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2025, 20 : 4071 - 4086
  • [9] A Meta-Reinforcement Learning-Based Poisoning Attack Framework Against Federated Learning
    Zhou, Wei
    Zhang, Donglai
    Wang, Hongjie
    Li, Jinliang
    Jiang, Mingjian
    IEEE ACCESS, 2025, 13 : 28628 - 28644
  • [10] Privacy-Enhanced Federated Learning Against Poisoning Adversaries
    Liu, Xiaoyuan
    Li, Hongwei
    Xu, Guowen
    Chen, Zongqi
    Huang, Xiaoming
    Lu, Rongxing
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2021, 16 : 4574 - 4588