Neural Network Explainable AI Based on Paraconsistent Analysis: An Extension

Cited by: 3
Authors
Marcondes, Francisco S. [1 ]
Duraes, Dalila [1 ]
Santos, Flavio [1 ]
Almeida, Jose Joao [1 ]
Novais, Paulo [1 ]
Affiliations
[1] Univ Minho, ALGORITMI Ctr, P-4710057 Braga, Portugal
Keywords
paraconsistent logic; explainable AI; neural network
DOI
10.3390/electronics10212660
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
This paper explores the use of paraconsistent analysis for assessing neural networks from an explainable AI perspective. It is an early exploration aiming to understand whether paraconsistent analysis can be applied to understanding neural networks and whether the subject is worth developing further in future research. The answers to both questions are affirmative. Paraconsistent analysis provides insightful prediction visualisation through a mature formal framework with proper support for reasoning. The significant potential envisioned is that paraconsistent analysis will be used to guide neural network development projects, despite its performance issues. This paper presents two explorations. The first is a baseline experiment based on MNIST, establishing the link between paraconsistency and neural networks. The second experiment aims to detect violence in audio files, verifying whether the paraconsistent framework scales to industry-level problems. The conclusion of this early assessment is that further research on this subject is worthwhile and may eventually result in a significant contribution to the field.
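The abstract does not detail the formalism, but paraconsistent analysis is conventionally built on paraconsistent annotated evidential logic, where a pair of evidence degrees (favourable evidence mu and unfavourable evidence lambda, both in [0, 1]) yields a degree of certainty and a degree of contradiction. A minimal sketch of those standard formulas follows; treating a network's outputs as the evidence pair is an illustrative assumption, not necessarily the authors' exact procedure:

```python
# Paraconsistent annotated evidential logic degrees (illustrative sketch).
# Given favourable evidence mu and unfavourable evidence lam, both in [0, 1]:
#   degree of certainty:     G_c  = mu - lam        (ranges over [-1, 1])
#   degree of contradiction: G_ct = mu + lam - 1    (ranges over [-1, 1])
# High |G_c| means confident evidence; high |G_ct| means the evidence is
# contradictory (G_ct near 1) or incomplete/paracomplete (G_ct near -1).

def paraconsistent_degrees(mu: float, lam: float) -> tuple[float, float]:
    """Return (certainty, contradiction) for the evidence pair (mu, lam)."""
    if not (0.0 <= mu <= 1.0 and 0.0 <= lam <= 1.0):
        raise ValueError("evidence values must lie in [0, 1]")
    return mu - lam, mu + lam - 1.0

# Example: strong favourable, weak unfavourable evidence -> high certainty,
# no contradiction.
gc, gct = paraconsistent_degrees(0.75, 0.25)  # -> (0.5, 0.0)
```

Plotting (G_c, G_ct) pairs on the unit lattice is what makes such analysis usable as a prediction visualisation: points cluster by how certain versus how contradictory the network's evidence is for each input.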
Pages: 12