User Perception of Ontology-Based Explanations of AI Models

Cited by: 0
Authors
Agafonov, Anton [1 ]
Ponomarev, Andrew [1 ]
Smirnov, Alexander [1 ]
Affiliations
[1] Russian Acad Sci, St Petersburg Fed Res Ctr, 14th Line 39, St Petersburg 199178, Russia
Source
COMPUTER-HUMAN INTERACTION RESEARCH AND APPLICATIONS, CHIRA 2024, PT II | 2025 / Vol. 2371
Funding
Russian Science Foundation;
Keywords
XAI; Explainable AI; Ontology; Ontology-based explanations; User study; Machine learning; Neural networks;
DOI
10.1007/978-3-031-83845-3_24
CLC classification code
TP3 [Computing Technology, Computer Technology];
Discipline code
0812;
Abstract
When AI models are used in high-stakes applications, it is crucial for a decision maker to understand why the model came to a certain conclusion. Ontology-based explanation techniques for artificial neural networks aim to provide explanations adapted to the domain vocabulary (encoded in an ontology) in order to make them easier to interpret and reason about. However, few studies actually explore how ontology-based explanations are perceived and how effective they are compared with more common explanation techniques for neural networks (e.g., LIME, GradCAM). The paper proposes two benchmark datasets with different task representations (tabular and graph) and a methodology for comparing how effectively users process explanations, employing both objective metrics (decision time, accuracy) and subjective ones. The methodology and datasets were then used in a user study comparing several explanation representations: a non-ontology-based one and three ontology-based ones (textual representation of ontological inference, inference graph, and attributive). According to the subjective evaluation, graph and textual explanations caused the least difficulty for the participants. Objective metrics vary with the size of the ontology, but inference graphs show good results in all the examined cases. Surprisingly, non-ontology-based explanations have almost the same positive effect on decision-making as ontology-based ones, although they are perceived as somewhat harder.
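To illustrate the kind of analysis the abstract describes, the short Python sketch below shows one way per-condition objective metrics (mean decision time, accuracy) and a subjective difficulty rating could be aggregated. This is not the authors' code; the condition labels, column names, and values are hypothetical placeholders.

# Illustrative sketch only (not from the paper): aggregating objective and
# subjective user-study metrics per explanation representation.
# Column names and all values below are assumptions made for the example.
import pandas as pd

# Hypothetical per-trial results from a user study.
trials = pd.DataFrame({
    "condition": ["non-ontology", "inference-graph", "textual", "attributive",
                  "non-ontology", "inference-graph", "textual", "attributive"],
    "decision_time_s": [14.2, 11.8, 12.5, 13.9, 15.1, 10.9, 12.0, 14.4],
    "correct": [1, 1, 1, 0, 0, 1, 1, 1],       # objective: decision correctness
    "difficulty": [4, 2, 2, 3, 4, 1, 2, 3],    # subjective rating (1 = easiest)
})

# Objective metrics: mean decision time and accuracy per explanation type;
# subjective metric: mean reported difficulty.
summary = trials.groupby("condition").agg(
    mean_time_s=("decision_time_s", "mean"),
    accuracy=("correct", "mean"),
    mean_difficulty=("difficulty", "mean"),
)
print(summary)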
Pages: 396-414
Page count: 19