FIDES: An ontology-based approach for making machine learning systems accountable

Cited by: 3
Authors
Fernandez, Izaskun [1 ]
Aceta, Cristina [1 ]
Gilabert, Eduardo [1 ]
Esnaola-Gonzalez, Iker [2 ]
Affiliations
[1] Basque Res & Technol Alliance BRTA, TEKNIKER, Parke Teknol, C-Inaki Goenaga 5, Eibar 20600, Spain
[2] BAS Digital Solut SL, P Castellana 77, Planta 14, Madrid 28046, Spain
Source
JOURNAL OF WEB SEMANTICS | 2023, Vol. 79
Funding
National Institutes of Health (US); European Union's Horizon 2020;
Keywords
Accountability; Ontology; Trustworthy artificial intelligence; Machine learning; ENERGY EFFICIENCY;
DOI
10.1016/j.websem.2023.100808
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Although technologies based on Artificial Intelligence (AI) are rather mature nowadays, their adoption, deployment and application are not as widespread as might be expected. Many barriers contribute to this, among which users' lack of trust stands out. Accountability is a relevant factor for progress on this trustworthiness aspect, as it makes it possible to determine the causes behind a given decision or suggestion made by an AI system. This article focuses on the accountability of a specific branch of AI, statistical machine learning (ML), based on a semantic approach. FIDES, an ontology-based approach for achieving the accountability of ML systems, is presented: all the relevant information related to an ML-based model is semantically annotated, from the dataset and model parametrisation to deployment aspects, so that it can later be exploited to answer questions related to reproducibility, replicability and, ultimately, accountability. The feasibility of the proposed approach has been demonstrated in two real-world scenarios, energy efficiency and manufacturing, and it is expected to pave the way towards raising awareness of the potential of Semantic Technologies for the factors that may be key to the trustworthiness of AI-based systems.
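The core idea the abstract describes can be sketched in a few lines: every accountability-relevant fact about an ML model (its training dataset, parametrisation, deployment) is recorded as an RDF-style (subject, predicate, object) triple that can later be queried. The following minimal pure-Python sketch illustrates this pattern only; the `fides:` property and resource names are illustrative assumptions, not the actual FIDES ontology vocabulary.

```python
# Sketch: annotate an ML model's lifecycle as (subject, predicate, object)
# triples, mirroring how an ontology-based store would hold them.
triples = set()

def annotate(subject, predicate, obj):
    """Record one accountability-relevant fact about the ML pipeline."""
    triples.add((subject, predicate, obj))

# All names below are hypothetical, for illustration only.
MODEL = "fides:model/energy-predictor-v1"
annotate(MODEL, "rdf:type", "fides:MLModel")
annotate(MODEL, "fides:trainedOn", "fides:dataset/building-sensors-2021")
annotate(MODEL, "fides:hyperparameter", "n_estimators=100")
annotate(MODEL, "fides:deployedAt", "edge-gateway-plant-A")

def objects(subject, predicate):
    """Answer an accountability question: which objects hold for (s, p)?"""
    return [o for s, p, o in triples if s == subject and p == predicate]

# e.g. "which dataset led to this model's decisions?"
print(objects(MODEL, "fides:trainedOn"))
```

In a real deployment these triples would live in an RDF store (e.g. via rdflib or a triplestore) and be queried with SPARQL, but the annotate-then-query pattern is the same.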
Pages: 13