DART: A solution for decentralized federated learning model robustness analysis

Authors
Feng, Chao [1 ]
Celdran, Alberto Huertas [1 ]
von der Assen, Jan [1 ]
Beltran, Enrique Tomas Martinez [2 ]
Bovet, Gerome [3 ]
Stiller, Burkhard [1 ]
Affiliations
[1] Univ Zurich UZH, Dept Informat IfI, Commun Syst Grp CSG, CH-8050 Zurich, Switzerland
[2] Univ Murcia, Dept Informat & Commun Engn, Murcia 30100, Spain
[3] Armasuisse Sci & Technol, Cyber Def Campus, CH-3602 Thun, Switzerland
Keywords
Decentralized federated learning; Poisoning attack; Cybersecurity; Model robustness; Taxonomy; Attacks; Privacy
DOI
10.1016/j.array.2024.100360
CLC number
TP301 [Theory and methods]
Discipline code
081202
Abstract
Federated Learning (FL) has emerged as a promising approach to address privacy concerns inherent in Machine Learning (ML) practices. However, conventional FL methods, particularly those following the Centralized FL (CFL) paradigm, rely on a central server for global aggregation, which introduces limitations such as bottlenecks and a single point of failure. To address these issues, the Decentralized FL (DFL) paradigm has been proposed, which removes the client-server boundary and enables all participants to engage in both model training and aggregation tasks. Nevertheless, like CFL, DFL remains vulnerable to adversarial attacks, notably poisoning attacks that undermine model performance. While existing research on model robustness has predominantly focused on CFL, there is a noteworthy gap in understanding the model robustness of the DFL paradigm. In this paper, a thorough review of poisoning attacks targeting model robustness in DFL systems, as well as their corresponding countermeasures, is presented. Additionally, a solution called DART is proposed to evaluate the robustness of DFL models; it is implemented and integrated into a DFL platform. Through extensive experiments, this paper compares the behavior of CFL and DFL under diverse poisoning attacks, pinpointing key factors affecting attack spread and effectiveness within DFL. It also evaluates the performance of different defense mechanisms and investigates whether defense mechanisms designed for CFL are compatible with DFL. The empirical results provide insights into research challenges and suggest ways to improve the robustness of DFL models in future research.
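To make the abstract's core threat concrete: the sketch below (not the paper's DART implementation; all names and values are hypothetical) shows a toy DFL neighborhood where each node aggregates its neighbors' model vectors without a central server. A plain coordinate-wise mean is pulled arbitrarily far off by a single model-poisoning node, while a coordinate-wise median, a simple Byzantine-robust baseline in the spirit of the defenses the paper surveys, stays close to the honest models.

```python
# Toy illustration of poisoning in decentralized aggregation.
# Assumption: each node's "model" is a flat parameter vector and one
# neighborhood aggregates locally; this is a sketch, not DART itself.
import statistics


def mean_aggregate(vectors):
    """Plain coordinate-wise mean: vulnerable to one poisoned vector."""
    n = len(vectors)
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / n for i in range(dim)]


def median_aggregate(vectors):
    """Coordinate-wise median: a simple Byzantine-robust alternative."""
    dim = len(vectors[0])
    return [statistics.median(v[i] for v in vectors) for i in range(dim)]


# Hypothetical local models of one DFL neighborhood (three honest nodes
# plus one model-poisoning node that submits a scaled, flipped update).
honest = [[1.0, 2.0], [1.1, 1.9], [0.9, 2.1]]
poisoned = [[100.0, -100.0]]
vectors = honest + poisoned

print(mean_aggregate(vectors))    # dragged toward the poisoned update
print(median_aggregate(vectors))  # remains near the honest consensus
```

The same contrast scales to real robust-aggregation rules (e.g. Krum or trimmed mean): the defense replaces the averaging step each node runs over its neighborhood, which is why CFL defenses do not always transfer directly to DFL topologies.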
Pages: 20