Mitigate Data Poisoning Attack by Partially Federated Learning

Cited: 0
Authors
Dam, Khanh Huu The [1]
Legay, Axel [1]
Affiliations
[1] UCLouvain, Louvain, Belgium
Source
18th International Conference on Availability, Reliability & Security (ARES 2023), 2023
Keywords
Data poisoning attack; Federated Learning; Data Privacy; Malware detection;
DOI
10.1145/3600160.3605032
Chinese Library Classification
TP [Automation technology, computer technology]
Discipline code
0812
Abstract
An efficient machine learning model for malware detection requires a large dataset to train. Yet it is not easy to collect such a large dataset without violating, or leaving vulnerable to potential violation, various aspects of data privacy. Our work proposes a federated learning framework that permits multiple parties to collaborate on learning behavioral graphs for malware detection. Our proposed graph classification framework allows the participating parties to freely decide their preferred classifier model without disclosing their preferences to the others involved. This mitigates the chance of data poisoning attacks. In our experiments, our classification model using partially federated learning achieved an F1-score of 0.97, close to the performance of models trained on centralized data. Moreover, the impact of a label-flipping attack against our model is less than 0.02.
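The abstract describes a "partially" federated setup: part of the model is trained collaboratively while each party keeps its own classifier private, and robustness is evaluated against label flipping. The sketch below is not the authors' implementation; it is a minimal NumPy illustration, under assumptions, of how such a split might look. Clients average only a shared feature-embedding matrix (FedAvg-style), their classifier heads never leave the client, and one client simulates a label-flipping poisoner. All names, dimensions, and the flip fraction are hypothetical.

    # Hypothetical sketch of partially federated learning with one label-flipping client.
    # Only the shared embedding is aggregated; local classifier heads stay private.
    import numpy as np

    rng = np.random.default_rng(0)
    N_CLIENTS, N_FEATURES, EMBED_DIM, ROUNDS, LR = 3, 20, 8, 50, 0.1

    def make_client_data(n=200, flip_fraction=0.0):
        """Toy binary-labelled feature vectors; optionally flip labels (poisoning)."""
        X = rng.normal(size=(n, N_FEATURES))
        y = (X @ rng.normal(size=N_FEATURES) > 0).astype(float)
        n_flip = int(flip_fraction * n)
        if n_flip:
            idx = rng.choice(n, size=n_flip, replace=False)
            y[idx] = 1.0 - y[idx]          # label-flipping attack on this client's data
        return X, y

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Client 0 is malicious and flips 30% of its labels; the others are honest.
    clients = [make_client_data(flip_fraction=0.3 if i == 0 else 0.0)
               for i in range(N_CLIENTS)]

    shared_embed = rng.normal(scale=0.1, size=(N_FEATURES, EMBED_DIM))   # federated part
    local_heads = [rng.normal(scale=0.1, size=EMBED_DIM) for _ in range(N_CLIENTS)]  # private

    for _ in range(ROUNDS):
        embed_updates = []
        for i, (X, y) in enumerate(clients):
            H = X @ shared_embed                 # shared embedding of the inputs
            p = sigmoid(H @ local_heads[i])      # private local classifier head
            err = p - y                          # logistic-loss gradient signal
            local_heads[i] -= LR * (H.T @ err) / len(y)            # stays local
            grad_embed = X.T @ np.outer(err, local_heads[i]) / len(y)
            embed_updates.append(shared_embed - LR * grad_embed)
        # Server aggregates only the shared embedding (plain averaging here).
        shared_embed = np.mean(embed_updates, axis=0)

    for i, (X, y) in enumerate(clients):
        acc = np.mean((sigmoid((X @ shared_embed) @ local_heads[i]) > 0.5) == y)
        print(f"client {i} local accuracy: {acc:.2f}")

Because only the embedding is exchanged, the poisoned client's influence is limited to the averaged update, while honest clients' private heads are untouched; this mirrors, at a toy scale, the intuition the abstract gives for why the framework limits data poisoning.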
Pages: 19