Privacy preserving Federated Learning framework for IoMT based big data analysis using edge computing

Cited by: 49
Authors
Nair, Akarsh K. [1 ]
Sahoo, Jayakrushna [1 ]
Raj, Ebin Deni [1 ]
Affiliations
[1] Indian Inst Informat Technol, Kottayam, India
Keywords
Federated learning; Edge computing; IoMT; Privacy preservation; Anonymity; Encryption; BLOCKCHAIN; INTERNET;
DOI
10.1016/j.csi.2023.103720
Chinese Library Classification (CLC): TP3 [Computing Technology; Computer Technology]
Discipline Code: 0812
Abstract
The current industrial scenario has witnessed the application of several artificial intelligence-based technologies for mining and processing IoMT-based big data. Federated Learning (FL), an emerging distributed machine learning paradigm, has been widely applied in IoMT-based systems to overcome the difficulties of incorporating AI into such lightweight distributed computing systems while also addressing privacy concerns. However, extensive research has shown that classical FL remains prone to privacy threats due to data leakage and the possibility of adversarial attacks during gradient-transfer operations. Motivated by these issues, we propose a privacy-preserving framework (Fed_Select) that ensures user anonymity in IoMT-based environments for big data analysis under the FL scheme. Fed_Select uses alternating minimization to limit the gradients and participants involved in system training, thereby reducing the system's points of vulnerability. The framework runs on an edge-computing architecture that ensures user anonymity through hybrid encryption techniques, with the added benefit of load reduction at the central server. In addition, Laplacian-noise-based differential privacy is applied to the shared attributes, keeping the transferred data confidential even in adversarial scenarios. Experimental results on standard datasets show that changes in the volume of gradients shared and in the number of participants are not proportional to the variation in system performance parameters. Specifically, an idealistic range of client and gradient-sharing fractions, along with an appropriate noise level for the differential-privacy implementation, is determined. Additionally, we analyze the system from a security perspective and compare it with other schemes.
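The Laplacian-noise differential-privacy step described in the abstract can be illustrated with a minimal sketch. This is not the paper's actual implementation; the clipping bound, epsilon value, and function names below are illustrative assumptions. A client first clips its gradient update to bound the sensitivity, then perturbs it with Laplace noise scaled to sensitivity/epsilon before sharing it.

```python
import numpy as np

def clip_gradient(grad, clip_norm):
    # Clip the gradient vector to a maximum L2 norm, bounding its sensitivity.
    norm = np.linalg.norm(grad)
    return grad * min(1.0, clip_norm / (norm + 1e-12))

def add_laplace_noise(grad, clip_norm, epsilon, rng=None):
    # Laplace mechanism: noise scale b = sensitivity / epsilon.
    rng = np.random.default_rng() if rng is None else rng
    scale = clip_norm / epsilon
    return grad + rng.laplace(loc=0.0, scale=scale, size=grad.shape)

# Example: a client perturbs its gradient update before sharing it
# with the edge server (values are illustrative).
g = np.array([0.8, -1.5, 2.2])
g_clipped = clip_gradient(g, clip_norm=1.0)
g_private = add_laplace_noise(g_clipped, clip_norm=1.0, epsilon=0.5)
```

A smaller epsilon yields larger noise and stronger privacy at the cost of accuracy, which is consistent with the abstract's point that an appropriate noise level must be determined empirically.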
Pages: 20