Ontology-Based Post-Hoc Neural Network Explanations Via Simultaneous Concept Extraction

Cited by: 1
Authors
Ponomarev, Andrew [1 ]
Agafonov, Anton [1 ]
Affiliations
[1] Russian Acad Sci, St Petersburg Fed Res Ctr, St Petersburg 199178, Russia
Source
INTELLIGENT SYSTEMS AND APPLICATIONS, VOL 2, INTELLISYS 2023 | 2024 / Vol. 823
Funding
Russian Science Foundation;
Keywords
Ontology; Explainable AI; Knowledge; Black-box; Convolutional neural network; Interpretability; Concept extraction; Neuro-symbolic AI;
DOI
10.1007/978-3-031-47724-9_29
CLC Number
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Ontology-based explanation techniques explain how a neural network reached a particular conclusion using human-understandable terms and their formal definitions, encoded in the form of an ontology. One promising direction in ontology-based neural network explanation relies on concept extraction: the process of establishing relationships between the network's internal representations and ontology concepts. Existing concept-extraction algorithms are search-based and require training multiple mapping networks for each concept, which can be time-consuming. The paper proposes a method for building post-hoc ontology-based explanations by training a single multi-label concept-extraction network that maps activations of the specified "black-box" network to ontology concepts. Experiments on two public datasets show that the proposed method generates accurate ontology-based explanations of a given network and requires significantly less time for concept extraction than existing algorithms.
Pages: 433-446
Page count: 14
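The core idea in the abstract, mapping a black-box network's activations to many ontology concepts at once with a single multi-label mapping, can be sketched as below. This is a minimal illustration under stated assumptions, not the paper's actual implementation: the synthetic data, dimensions, and a plain NumPy logistic probe standing in for the concept-extraction network are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins (not from the paper): "activations" plays the role of
# hidden-layer activations of the black-box network; each column of "concepts"
# is a binary ontology-concept label (e.g. "has_wheels", "is_vehicle", ...).
n_samples, n_features, n_concepts = 200, 16, 4

true_w = rng.normal(size=(n_features, n_concepts))
activations = rng.normal(size=(n_samples, n_features))
concepts = (activations @ true_w > 0).astype(float)  # synthetic multi-label targets

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A SINGLE multi-label probe predicts all concepts simultaneously, in contrast
# to search-based approaches that train one mapping network per concept.
w = np.zeros((n_features, n_concepts))
b = np.zeros(n_concepts)
lr = 0.5
for _ in range(300):
    p = sigmoid(activations @ w + b)                    # concept probabilities
    w -= lr * activations.T @ (p - concepts) / n_samples  # BCE gradient step
    b -= lr * (p - concepts).mean(axis=0)

pred = (sigmoid(activations @ w + b) > 0.5).astype(float)
accuracy = (pred == concepts).mean()
print(round(accuracy, 2))
```

The extracted concept predictions could then be passed to an ontology reasoner to assemble a human-readable explanation; that reasoning step is outside the scope of this sketch.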