FedCCW: a privacy-preserving Byzantine-robust federated learning with local differential privacy for healthcare

Cited: 0
Authors
Zhang, Lianfu [1 ]
Fang, Guangwei [1 ]
Tan, Zuowen [2 ]
Affiliations
[1] Yichun Univ, Coll Math & Computat Sci, Yichun 336000, Peoples R China
[2] Jiangxi Univ Finance & Econ, Sch Comp & Artificial Intelligence, Dept Cyberspace Secur, Nanchang 330013, Peoples R China
Source
CLUSTER COMPUTING-THE JOURNAL OF NETWORKS SOFTWARE TOOLS AND APPLICATIONS | 2025, Vol. 28, No. 3
Funding
National Natural Science Foundation of China
Keywords
Federated learning; Byzantine attacks; Privacy-preserving; Cosine similarity; Spectral clustering; FOUNDATIONS; MODEL;
DOI
10.1007/s10586-024-04894-6
CLC number
TP [Automation Technology, Computer Technology]
Discipline code
0812
Abstract
The integration of artificial intelligence into the medical sector has led healthcare institutions to accumulate substantial medical data, which they seek to use to train high-quality deep learning models that aid medical diagnosis. However, the sensitive nature of medical data poses challenges for data fusion. Federated Learning (FL) has emerged as a prominent approach because it trains models without direct access to raw data. Nonetheless, research indicates that FL still faces the risk of privacy breaches and, during model aggregation, may be vulnerable to various Byzantine attacks. In this study, we design FedCCW, a novel Byzantine-robust and privacy-preserving FL scheme based on a Clipping, Clustering, and Weighting mechanism, to enable collaboration among medical institutions and facilitate the integration of medical data. A Differential Privacy (DP) noise mechanism obfuscates participants' local training gradients against privacy breaches during FL. A clustering mechanism categorizes participants into groups, thereby identifying and filtering out malicious updates that deviate from the intended aggregation path. A dynamic clipping method prevents attackers from manipulating the server's cosine-similarity and spectral-clustering mechanisms by artificially inflating updates without altering their direction, thereby improving the accuracy of the global model. An adaptive weighting method dynamically adjusts participant weights, thereby expediting model convergence. Extensive experiments on real medical datasets demonstrate that FedCCW outperforms existing methods.
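The clip-cluster-weight pipeline summarized in the abstract can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's algorithm: the clipping threshold (the median update norm), the two-group split (a crude numpy stand-in for spectral clustering), and the similarity-based weights are all assumptions made for the sketch.

```python
import numpy as np

def fedccw_style_aggregate(updates):
    """Illustrative clip -> cluster -> weight aggregation of client updates."""
    U = np.stack(updates).astype(float)  # shape: (n_clients, dim)

    # 1) Dynamic clipping: rescale every update to at most the median L2 norm,
    #    so an attacker cannot dominate by inflating magnitude without
    #    changing direction.
    norms = np.linalg.norm(U, axis=1)
    clip = np.median(norms)
    U = U * np.minimum(1.0, clip / np.maximum(norms, 1e-12))[:, None]

    # 2) Pairwise cosine similarity between the clipped updates.
    Un = U / np.maximum(np.linalg.norm(U, axis=1, keepdims=True), 1e-12)
    S = Un @ Un.T

    # 3) Two-group split (a crude stand-in for spectral clustering):
    #    keep the larger group of clients whose mean similarity to the
    #    other clients is above the overall average.
    n = len(U)
    mean_sim = (S.sum(axis=1) - 1.0) / (n - 1)
    benign = mean_sim >= mean_sim.mean()
    if benign.sum() < n / 2:
        benign = ~benign

    # 4) Adaptive weighting: weight the kept clients by their (non-negative)
    #    mean similarity, so well-aligned clients contribute more.
    w = np.clip(mean_sim[benign], 0.0, None)
    w = w / w.sum() if w.sum() > 0 else np.full(benign.sum(), 1.0 / benign.sum())
    return (U[benign] * w[:, None]).sum(axis=0)
```

Under these assumptions, a small group of sign-flipped, magnitude-inflated updates is first clipped down to the median norm and then excluded by the similarity split, so the aggregate stays close to the honest clients' direction.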
Pages: 21