Increasing trust in AI through privacy preservation and model explainability: Federated Learning of Fuzzy Regression Trees

Cited by: 8
Authors
Barcena, Jose Luis Corcuera [1 ]
Ducange, Pietro [1 ]
Marcelloni, Francesco [1 ]
Renda, Alessandro [1 ]
Affiliations
[1] Univ Pisa, Dept Informat Engn, Largo L Lazzarino,1, I-56122 Pisa, Italy
Keywords
Federated Learning; Fuzzy Regression Trees; Regression models; Explainable Artificial Intelligence
DOI
10.1016/j.inffus.2024.102598
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Federated Learning (FL) lets multiple data owners collaborate in training a global model without any violation of data privacy, which is a crucial requirement for enhancing users' trust in Artificial Intelligence (AI) systems. Despite the significant momentum recently gained by the FL paradigm, most of the existing approaches in the field neglect another key pillar for the trustworthiness of AI systems, namely explainability. In this paper, we propose a novel approach for FL of fuzzy regression trees (FRTs), which are generally acknowledged as highly interpretable-by-design models. The proposed FL procedure is designed for the scenario of horizontally partitioned data and is based on the transmission of aggregated statistics from the clients to a central server, which carries out the tree induction procedure. It is shown that the proposed approach faithfully approximates the ideal case in which the tree induction algorithm is applied to the union of all local datasets, while still ensuring privacy preservation. Furthermore, the FL approach brings benefits, in terms of generalization capability, compared to the local learning setting in which each participant learns its own FRT based only on its private local dataset. The adoption of linear models in the leaf nodes ensures a competitive level of performance, as assessed by an extensive experimental campaign on benchmark datasets. The analysis of the results covers both the accuracy and the interpretability of the FRTs. Finally, we discuss the application of the proposed federated FRT to the task of Quality of Experience forecasting in an automotive case study.
Pages: 15
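
To make the federated scheme described in the abstract more concrete, the following is a minimal Python sketch of the general idea: each client computes only membership-weighted aggregates of its private targets, and the server combines these aggregates to score candidate fuzzy splits by weighted variance reduction. All names here (Client, weighted_stats, the triangular fuzzy sets, the split-scoring criterion) are illustrative assumptions and not the authors' actual algorithm or implementation.

```python
# Sketch (assumption, not the paper's implementation): clients share only
# membership-weighted aggregates; the server scores candidate fuzzy splits.
import numpy as np


def triangular_membership(x, a, b, c):
    """Triangular fuzzy set with support [a, c] and peak at b."""
    left = np.clip((x - a) / (b - a + 1e-12), 0.0, 1.0)
    right = np.clip((c - x) / (c - b + 1e-12), 0.0, 1.0)
    return np.minimum(left, right)


class Client:
    """Holds a private local dataset; only 3-number aggregates leave the client."""

    def __init__(self, X, y):
        self.X, self.y = X, y

    def weighted_stats(self, feature, fuzzy_set):
        """Membership-weighted sums sufficient to compute mean/variance of y."""
        w = triangular_membership(self.X[:, feature], *fuzzy_set)
        return np.array([w.sum(), (w * self.y).sum(), (w * self.y ** 2).sum()])


def weighted_variance(agg):
    """Weight mass and weighted variance of y from (sum_w, sum_wy, sum_wy2)."""
    sw, swy, swy2 = agg
    if sw <= 1e-12:
        return 0.0, 0.0
    mean = swy / sw
    return sw, swy2 / sw - mean ** 2


def server_pick_split(clients, candidate_splits):
    """Server side: pick the fuzzy split minimizing the weighted child variance."""
    best = None
    for feature, fuzzy_sets in candidate_splits:
        total_w, score = 0.0, 0.0
        for fs in fuzzy_sets:
            # Aggregation across clients: raw data never leaves its owner.
            agg = sum(c.weighted_stats(feature, fs) for c in clients)
            sw, var = weighted_variance(agg)
            total_w += sw
            score += sw * var
        score /= max(total_w, 1e-12)
        if best is None or score < best[0]:
            best = (score, feature, fuzzy_sets)
    return best


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clients = []
    for _ in range(3):  # three participants, horizontally partitioned data
        X = rng.uniform(0.0, 1.0, size=(200, 2))
        y = 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(0.0, 0.1, size=200)
        clients.append(Client(X, y))
    # One candidate split per feature: "low" and "high" triangular fuzzy sets.
    candidates = [(f, [(0.0, 0.0, 1.0), (0.0, 1.0, 1.0)]) for f in range(2)]
    print(server_pick_split(clients, candidates))
```

The sketch covers only the split-selection step; in the approach described in the abstract, the statistics needed to fit the linear models in the leaf nodes would presumably be aggregated in a similar fashion, so that only summary statistics, never raw records, reach the server.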