Concept logic trees: enabling user interaction for transparent image classification and human-in-the-loop learning

Cited by: 0
Authors
David M. Rodríguez
Manuel P. Cuéllar
Diego P. Morales
Affiliations
[1] University of Granada
[2] HAT.tec, Lilienthalstraße 15
Published in
Applied Intelligence | 2024, Volume 54
Keywords
Soft decision trees; Concepts; XAI; Neural symbolic; Image classification; Human-in-the-loop
DOI
Not available
Abstract
Interpretable deep learning models are increasingly important in domains where transparent decision-making is required. In this field, user interaction with the model can contribute to its interpretability. In this work, we present an approach that combines soft decision trees, neural symbolic learning, and concept learning to create an image classification model that enhances interpretability and supports user interaction, control, and intervention. The key novelty of our method lies in the fusion of an interpretable architecture with neural symbolic learning, which allows the incorporation of expert knowledge and user interaction. Furthermore, our solution enables inspection of the model through queries expressed as first-order logic predicates. Our main contribution is a human-in-the-loop model resulting from this fusion of neural symbolic learning and an interpretable architecture. We validate the effectiveness of our approach through comprehensive experiments, demonstrating competitive performance on challenging datasets compared with state-of-the-art solutions.
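To make the abstract's central idea concrete, the following is a minimal sketch (not the authors' implementation) of a soft decision tree that routes concept-activation vectors, in the spirit of combining concept learning with an interpretable tree classifier. All class names, shapes, and the random concept input are illustrative assumptions.

```python
# Minimal sketch, assuming a soft decision tree over concept activations.
# This is not the paper's code; names and dimensions are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SoftTreeNode(nn.Module):
    """Inner node: a sigmoid gate over concept activations produces a soft
    routing probability toward the right subtree; leaves hold learnable
    class distributions."""

    def __init__(self, n_concepts: int, depth: int, n_classes: int):
        super().__init__()
        self.is_leaf = depth == 0
        if self.is_leaf:
            # Leaf: learnable logits over the output classes.
            self.logits = nn.Parameter(torch.zeros(n_classes))
        else:
            self.gate = nn.Linear(n_concepts, 1)
            self.left = SoftTreeNode(n_concepts, depth - 1, n_classes)
            self.right = SoftTreeNode(n_concepts, depth - 1, n_classes)

    def forward(self, c: torch.Tensor) -> torch.Tensor:
        # c: (batch, n_concepts) concept activations, e.g. produced by a
        # concept extractor network (assumed, not defined here).
        if self.is_leaf:
            return F.softmax(self.logits, dim=-1).expand(c.size(0), -1)
        p_right = torch.sigmoid(self.gate(c))                 # (batch, 1)
        return p_right * self.right(c) + (1 - p_right) * self.left(c)


# Usage: classify 4 inputs described by 10 concept activations into 5 classes.
tree = SoftTreeNode(n_concepts=10, depth=3, n_classes=5)
concepts = torch.rand(4, 10)     # stand-in for a concept extractor's output
probs = tree(concepts)           # (4, 5) class probabilities
print(probs.sum(dim=-1))         # each row sums to 1
```

Because every routing decision is a linear gate over named concepts, the path an image takes through the tree can be read off and, in principle, queried with logic predicates over those concepts, which is the kind of inspection the abstract refers to.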
Pages: 3667-3679
Number of pages: 12