Blockchain-Based Federated Learning With SMPC Model Verification Against Poisoning Attack for Healthcare Systems

Cited by: 33
Authors
Kalapaaking, Aditya Pribadi [1 ]
Khalil, Ibrahim [2 ]
Yi, Xun [3 ]
Affiliations
[1] RMIT Univ, Dept Comp Sci, Melbourne, Vic 3000, Australia
[2] Royal Melbourne Inst Technol RMIT Univ, Distributed Syst & Networking, Melbourne, Vic 3000, Australia
[3] RMIT Univ, Sch Comp Sci & Informat Technol, Melbourne, Vic 3000, Australia
Funding
Australian Research Council;
Keywords
Federated learning; secure multi-party computation; blockchain; poisoning attack; encrypted inference; healthcare systems; PRIVACY;
DOI
10.1109/TETC.2023.3268186
CLC number
TP [Automation Technology, Computer Technology];
Subject classification code
0812;
Abstract
Due to rising awareness of privacy and security in machine learning applications, federated learning (FL) has received widespread attention and has been applied in several areas, e.g., intelligent healthcare systems, IoT-based industries, and smart cities. FL enables clients to train a global model collaboratively without exposing their local training data. However, current FL schemes are vulnerable to adversarial attacks, and the FL architecture makes malicious model updates difficult to detect and defend against. In addition, approaches that protect FL from malicious updates while preserving the privacy of the model have not been sufficiently explored. This article proposes blockchain-based federated learning with SMPC model verification against poisoning attacks for healthcare systems. First, we verify the machine learning models submitted by the FL participants through an encrypted inference process and remove any compromised model. Once the participants' local models have been verified, the models are sent to the blockchain nodes to be securely aggregated. We conducted several experiments with different medical datasets to evaluate the proposed framework.
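The abstract describes a verify-then-aggregate round: screen each participant's model, discard compromised ones, and aggregate the rest. The sketch below is a minimal, hypothetical illustration of that flow in plain Python, not the paper's implementation: a held-out accuracy check on a toy linear model stands in for the paper's encrypted-inference verification, and a simple average stands in for the blockchain-hosted SMPC aggregation. All names (`evaluate`, `verify_and_aggregate`, the threshold value) are illustrative assumptions.

```python
import numpy as np

def evaluate(weights, X, y):
    """Toy linear classifier: accuracy of sign(X @ w) against labels in {-1, +1}."""
    preds = np.sign(X @ weights)
    return float(np.mean(preds == y))

def verify_and_aggregate(client_weights, X_val, y_val, threshold=0.6):
    """Reject models scoring below `threshold` on held-out data; average the rest.

    The accuracy check here is a plain-text stand-in for the paper's
    encrypted-inference verification, and the mean is a stand-in for
    SMPC aggregation on the blockchain nodes.
    """
    accepted = [w for w in client_weights if evaluate(w, X_val, y_val) >= threshold]
    if not accepted:
        raise ValueError("all client updates were rejected")
    return np.mean(accepted, axis=0), len(accepted)

rng = np.random.default_rng(0)
true_w = np.array([1.0, -1.0, 0.5])
X_val = rng.normal(size=(200, 3))
y_val = np.sign(X_val @ true_w)

honest = [true_w + rng.normal(scale=0.1, size=3) for _ in range(4)]
poisoned = [-true_w]  # crude stand-in for a poisoned update: a sign-flipped model
global_w, kept = verify_and_aggregate(honest + poisoned, X_val, y_val)
print(kept)  # the poisoned update is filtered out before aggregation
```

Under these assumptions, the flipped model scores near zero accuracy on the validation set and is excluded, so only the four honest updates contribute to the aggregated global model.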
Pages: 269-280
Page count: 12