Federated Learning of Explainable Artificial Intelligence Models for Predicting Parkinson's Disease Progression

Cited by: 2
Authors
Barcena, Jose Luis Corcuera [1 ]
Ducange, Pietro [1 ]
Marcelloni, Francesco [1 ]
Renda, Alessandro [1 ]
Ruffini, Fabrizio [1 ]
Affiliations
[1] Univ Pisa, Dept Informat Engn, Largo Lucio Lazzarino 1, I-56122 Pisa, Italy
Source
EXPLAINABLE ARTIFICIAL INTELLIGENCE, XAI 2023, PT I | 2023, Vol. 1901
Keywords
Federated Learning; Explainable Artificial Intelligence; Linguistic Fuzzy Models; FED-XAI; Parkinson; Privacy
DOI
10.1007/978-3-031-44064-9_34
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Services based on Artificial Intelligence (AI) are becoming increasingly pervasive in our society. At the same time, we are witnessing a growing awareness of the ethical aspects and the trustworthiness of AI tools, especially in high-stakes domains such as healthcare. In this paper, we propose the adoption of AI techniques for predicting Parkinson's Disease progression, with the overarching aim of addressing the urgent need for trustworthiness. We address two key requirements of trustworthy AI, namely privacy preservation in learning AI models and their explainability. As for the former, we consider the (rather common) case of medical data coming from different health institutions, assuming that the data cannot be shared due to privacy concerns. To meet this constraint, we leverage federated learning (FL) as a paradigm for collaborative model training among multiple parties without any disclosure of private raw data. As for the latter, we focus on highly interpretable models, i.e., models whose decision process humans can understand. An extensive experimental analysis carried out on the well-known Parkinson Telemonitoring dataset shows that the proposed approach, based on FL of fuzzy rule-based systems, achieves data privacy and interpretability simultaneously. Results are reported for different data partitioning scenarios, and the interpretable-by-design model is also compared with an opaque neural network model.
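To make the federated setup described in the abstract concrete, below is a minimal FedAvg-style sketch of horizontal federated learning in Python with NumPy. The linear local model, function names, and hyperparameters are illustrative assumptions, not the paper's procedure: the paper federates linguistic fuzzy rule-based regression systems under the FED-XAI scheme, whose aggregation would combine rule bases rather than average a flat parameter vector.

```python
import numpy as np

# Minimal FedAvg-style sketch of horizontal federated learning.
# Each "client" (e.g., a health institution) trains on its private
# data shard; only model parameters are sent to the server for
# averaging. The linear local model is a stand-in for illustration.

def local_update(weights, X, y, lr=0.1, epochs=5):
    """A few epochs of gradient descent on one client's private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of the MSE loss
        w -= lr * grad
    return w

def federated_round(global_w, clients):
    """One communication round: local training, then weighted averaging.
    Raw data never leaves a client; only parameters do."""
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    # Average local models, weighted by each client's dataset size
    return sum(s * w for s, w in zip(sizes / sizes.sum(), local_ws))

rng = np.random.default_rng(0)
true_w = np.array([1.5, -2.0])
clients = []  # three institutions, each holding a private data shard
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + 0.1 * rng.normal(size=50)))

w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, clients)
print("estimated coefficients:", w)  # approaches true_w
```

Weighting by dataset size is the standard FedAvg choice; the privacy property emphasized in the abstract comes from the fact that only model parameters, never raw patient records, cross the network.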
Pages: 630-648
Number of pages: 19