Knowledge-graph-based explainable AI: A systematic review

Cited: 30
Authors
Rajabi, Enayat [1 ]
Etminani, Kobra [2 ]
Affiliations
[1] Cape Breton Univ, Shannon Sch Business, 1250 Grand Lake Rd, Sydney, NS B1P 6L2, Canada
[2] Halmstad Univ, Ctr Appl Intelligent Syst Res CAISR, Halmstad, Sweden
Funding
Natural Sciences and Engineering Research Council of Canada (NSERC);
Keywords
Knowledge graph; artificial intelligence; systematic review; explainable AI
DOI
10.1177/01655515221112844
Chinese Library Classification (CLC)
TP [automation technology, computer technology];
Subject Classification Code
0812;
Abstract
In recent years, knowledge graphs (KGs) have been widely applied in various domains for different purposes. The semantic model of KGs can represent knowledge through a hierarchical structure based on classes of entities, their properties, and their relationships. The construction of large KGs can enable the integration of heterogeneous information sources and help Artificial Intelligence (AI) systems be more explainable and interpretable. This systematic review examines a selection of recent publications to understand how KGs are currently being used in eXplainable AI systems. To achieve this goal, we design a framework and divide the use of KGs into four categories: extracting features, extracting relationships, constructing KGs, and KG reasoning. We also identify where KGs are mostly used in eXplainable AI systems (pre-model, in-model, and post-model) according to the aforementioned categories. Based on our analysis, KGs have been mainly used in pre-model XAI for feature and relation extraction. They were also utilised for inference and reasoning in post-model XAI. We found several studies that leveraged KGs to explain the XAI models in the healthcare domain.
Pages: 1019-1029
Page count: 11
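
To make the abstract's four categories concrete, below is a minimal illustrative sketch in Python using the networkx library. The entities (aspirin, COX1, gastric_ulcer, prostaglandin) and the feature scheme are hypothetical examples for illustration, not drawn from the paper. The sketch shows a toy KG used pre-model for feature and relation extraction, and post-model where a KG path serves as an explanation.

import networkx as nx

# Toy knowledge graph (hypothetical entities and relations, illustration only)
kg = nx.MultiDiGraph()
kg.add_edge("aspirin", "COX1", relation="inhibits")
kg.add_edge("aspirin", "gastric_ulcer", relation="causes")
kg.add_edge("COX1", "prostaglandin", relation="produces")

# Pre-model XAI: an entity's outgoing relations become interpretable,
# human-readable model features ("extracting features/relationships")
def entity_features(graph, entity):
    return {f"{d['relation']}:{v}": 1
            for _, v, d in graph.out_edges(entity, data=True)}

print(entity_features(kg, "aspirin"))
# -> {'inhibits:COX1': 1, 'causes:gastric_ulcer': 1}

# Post-model XAI: a KG path connecting an input to an outcome can be
# presented as an explanation ("KG reasoning")
path = nx.shortest_path(kg.to_undirected(), "aspirin", "prostaglandin")
print(" -> ".join(path))
# -> aspirin -> COX1 -> prostaglandin

The systems surveyed in the review use far richer graphs (e.g., biomedical KGs) and embedding- or rule-based reasoning rather than toy path lookups, but the division of labour is the same: the KG supplies features and relations before model training (pre-model) and supplies explanatory inference paths after prediction (post-model).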