Neural Network Explainable AI Based on Paraconsistent Analysis: An Extension

Cited by: 3
Authors:
Marcondes, Francisco S. [1 ]
Duraes, Dalila [1 ]
Santos, Flavio [1 ]
Almeida, Jose Joao [1 ]
Novais, Paulo [1 ]
Affiliation:
[1] Univ Minho, ALGORITMI Ctr, P-4710057 Braga, Portugal
Keywords:
paraconsistent logic; explainable AI; neural network;
DOI
10.3390/electronics10212660
Chinese Library Classification (CLC):
TP [Automation Technology, Computer Technology]
Subject Classification Code:
0812
Abstract:
This paper explores the use of paraconsistent analysis for assessing neural networks from an explainable AI perspective. It is an early exploration paper aiming to understand whether paraconsistent analysis can be applied to understanding neural networks and whether the subject is worth developing further in future research. The answers to both questions are affirmative. Paraconsistent analysis provides insightful prediction visualisation through a mature formal framework that gives proper support for reasoning. The significant potential envisioned is that paraconsistent analysis will be used to guide neural network development projects, despite the performance issues. This paper provides two explorations. The first was a baseline experiment based on MNIST for establishing the link between paraconsistency and neural networks. The second experiment aimed to detect violence in audio files in order to verify whether the paraconsistent framework scales to industry-level problems. The conclusion drawn from this early assessment is that further research on this subject is worthwhile and may eventually result in a significant contribution to the field.
Pages: 12
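The abstract above refers to paraconsistent analysis without giving its formal apparatus. In the related literature this kind of analysis is commonly based on Paraconsistent Annotated Logic with annotation of two values (PAL2v), in which a favourable evidence degree mu and an unfavourable evidence degree lambda combine into a certainty degree Dc = mu - lambda and a contradiction degree Dct = mu + lambda - 1. The Python sketch below is illustrative only: the function name, the example values, and the idea of drawing the evidence degrees from a network's output scores are assumptions, not details taken from this paper.

    # Illustrative PAL2v sketch (assumption, not from the paper): compute the
    # certainty and contradiction degrees of a prediction from two evidence values.

    def pal2v_degrees(mu: float, lam: float):
        """Return (Dc, Dct) for favourable evidence mu and unfavourable
        evidence lam, both assumed to lie in [0, 1]."""
        dc = mu - lam          # certainty degree: +1 = true, -1 = false
        dct = mu + lam - 1.0   # contradiction degree: +1 = inconsistent, -1 = indeterminate
        return dc, dct

    if __name__ == "__main__":
        # Hypothetical prediction: strong favourable, moderate unfavourable evidence.
        dc, dct = pal2v_degrees(mu=0.9, lam=0.4)
        print(f"Dc = {dc:+.2f}, Dct = {dct:+.2f}")  # Dc = +0.50, Dct = +0.30

Plotting (Dc, Dct) pairs on the unit lattice is one way such an analysis could show where a network's predictions fall between truth, falsity, inconsistency and indeterminacy, which would match the prediction-visualisation role described in the abstract.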