Two-phase Defense Against Poisoning Attacks on Federated Learning-based Intrusion Detection

Times Cited: 20
Authors
Lai, Yuan-Cheng [1 ]
Lin, Jheng-Yan [1 ]
Lin, Ying-Dar [2 ]
Hwang, Ren-Hung [3 ]
Lin, Po-Chin [4 ]
Wu, Hsiao-Kuang [5 ]
Chen, Chung-Kuan [6 ]
Affiliations
[1] Natl Taiwan Univ Sci & Technol, Dept Informat Management, Taipei, Taiwan
[2] Natl Yang Ming Chiao Tung Univ, Dept Comp Sci, Hsinchu, Taiwan
[3] Natl Yang Ming Chiao Tung Univ, Coll Artificial Intelligence, Tainan, Taiwan
[4] Natl Chung Cheng Univ, Dept Comp Sci & Informat Engn, Chiayi, Taiwan
[5] Natl Cent Univ, Dept Comp Sci & Informat Engn, Taoyuan, Taiwan
[6] Cycraft Technol, Taipei, Taiwan
Keywords
Federated Learning; Intrusion Detection; Poisoning Attack; Backdoor Attack; Local Outlier Factor
DOI
10.1016/j.cose.2023.103205
Chinese Library Classification (CLC)
TP (Automation and Computer Technology)
Discipline Code
0812
Abstract
The Machine Learning-based Intrusion Detection System (ML-IDS) has become popular because it does not require manual rule updates and recognizes attack variants better. However, because of data privacy concerns in ML-IDS, the Federated Learning-based IDS (FL-IDS) was proposed. In each round of federated learning, every participant first trains its local model and sends the model's weights to the global server, which aggregates the received weights and distributes the aggregated global model back to the participants. An attacker can mount poisoning attacks, including label-flipping and backdoor attacks, to directly generate a malicious local model and thereby indirectly pollute the global model. A few studies defend against poisoning attacks, but they address only label-flipping attacks in the image domain. Therefore, we propose a two-phase defense mechanism, called Defending Poisoning Attacks in Federated Learning (DPA-FL), applied to intrusion detection. The first phase uses relative differences to quickly compare weights between participants, since the local models of attackers and benign participants differ markedly. The second phase tests the aggregated model on a dataset and, when its accuracy is low, tries to identify the attackers. Experiment results show that DPA-FL reaches 96.5% accuracy in defending against poisoning attacks. Compared with other defense mechanisms, DPA-FL improves the F1-score by 20~64% under backdoor attacks. Moreover, DPA-FL can exclude the attackers within twelve rounds when the attackers are few.
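The federated round and the first-phase filtering described in the abstract can be sketched as follows. This is a minimal illustration only: the Euclidean distance, the z-score cutoff `threshold`, and the function names `fedavg`/`phase1_filter` are assumptions for demonstration, not the paper's exact DPA-FL algorithm (which the abstract and keywords associate with relative weight differences and the Local Outlier Factor).

```python
import numpy as np

def fedavg(weights):
    """Aggregate participant weight vectors by simple averaging (FedAvg-style)."""
    return np.mean(np.asarray(weights, dtype=float), axis=0)

def phase1_filter(weights, threshold=2.0):
    """Phase-1 sketch: keep participants whose weights do not deviate
    strongly from the rest. `threshold` is a hypothetical z-score cutoff."""
    W = np.asarray(weights, dtype=float)
    n = len(W)
    # mean distance from each participant's weights to all other participants
    dists = np.array([
        np.mean([np.linalg.norm(W[i] - W[j]) for j in range(n) if j != i])
        for i in range(n)
    ])
    z = (dists - dists.mean()) / (dists.std() + 1e-12)
    return [i for i in range(n) if z[i] < threshold]

# Toy round: 9 benign updates clustered near one model, 1 poisoned update far away.
rng = np.random.default_rng(0)
benign_updates = [rng.normal(0.0, 0.01, 8) for _ in range(9)]
poisoned = rng.normal(5.0, 0.01, 8)  # e.g. a label-flipped or backdoored local model
all_updates = benign_updates + [poisoned]

kept = phase1_filter(all_updates)
global_model = fedavg([all_updates[i] for i in kept])
print(kept)  # the poisoned participant (index 9) is excluded from aggregation
```

In this toy setting, the poisoned update sits far from the benign cluster, so its mean distance to the others is an outlier and it is dropped before aggregation; the paper's second phase would additionally validate the aggregated model's accuracy on a test dataset.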
Pages: 14