Automating the design and development of gradient descent trained expert system networks

Cited by: 4
Authors
Straub, Jeremy [1 ]
Affiliations
[1] North Dakota State Univ, Dept Comp Sci, 1320 Albrecht Blvd,Room 258, Fargo, ND 58108 USA
Keywords
Expert systems; Gradient descent; Network design; Automation; Defensible artificial intelligence; Machine learning; Training; EXPLAINABLE ARTIFICIAL-INTELLIGENCE; NEURAL-NETWORK; GENETIC ALGORITHM; BACKPROPAGATION; OPTIMIZATION; SEARCH;
DOI
10.1016/j.knosys.2022.109465
CLC classification
TP18 [Theory of artificial intelligence];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Prior work introduced a gradient descent trained expert system that conceptually combines the learning capabilities of neural networks with the understandability and defensible logic of an expert system. This system was shown to be able to learn patterns from data and to perform decision-making at levels rivaling those reported by neural network systems. The principal limitation of the approach, though, was the necessity for the manual development of a rule-fact network (which is then trained using backpropagation). This paper proposes a technique for overcoming this significant limitation, as compared to neural networks. Specifically, this paper proposes the use of larger and denser-than-application-need rule-fact networks which are trained, pruned, manually reviewed and then re-trained for use. Multiple types of networks are evaluated under multiple operating conditions and these results are presented and assessed. Based on these individual experimental condition assessments, the proposed technique is evaluated. The data presented shows that error rates as low as 3.9% (mean, 1.2% median) can be obtained, demonstrating the efficacy of this technique for many applications. (C) 2022 Elsevier B.V. All rights reserved.
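The train-prune-retrain workflow the abstract describes can be sketched as follows. This is a minimal illustration under assumed simplifications, not the paper's implementation: each rule is taken to compute a weighted average of two input facts (the weight pair summing to 1), weights are fit by gradient descent on squared output error, and rules whose weights saturate near 0 or 1 are pruned before retraining. All names and the update rule here are illustrative.

```python
import random

class Rule:
    """A rule combining two input facts into an output fact via a weighted average."""
    def __init__(self, a, b, out):
        self.a, self.b, self.out = a, b, out  # indices into the fact list
        self.w = random.random()              # weight on fact a; fact b gets 1 - w

def forward(facts, rules):
    """Propagate fact values through the rules in order."""
    facts = list(facts)
    for r in rules:
        facts[r.out] = r.w * facts[r.a] + (1 - r.w) * facts[r.b]
    return facts

def train(rules, samples, target_idx, lr=0.1, epochs=500):
    """Fit rule weights by gradient descent on squared error at the target fact."""
    for _ in range(epochs):
        for inputs, target in samples:
            facts = forward(inputs, rules)
            err = facts[target_idx] - target
            for r in rules:
                # d(output)/dw for this rule, used as a crude local estimate;
                # it is exact for a rule that feeds the target fact directly
                grad = facts[r.a] - facts[r.b]
                r.w = min(1.0, max(0.0, r.w - lr * err * grad))

def prune(rules, eps=0.05):
    """Drop rules whose weight saturated near 0 or 1: they pass one input
    through essentially unchanged and contribute little structure."""
    return [r for r in rules if eps < r.w < 1 - eps]
```

In the approach the abstract describes, the pruned network would then be manually reviewed before re-training; in this sketch, `prune` simply returns the surviving rule list, which can be passed back to `train`.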
Pages: 18
References
112 in total
  • [1] Speeding up backpropagation using multiobjective evolutionary algorithms
    Abbass, HA
    [J]. NEURAL COMPUTATION, 2003, 15 (11) : 2705 - 2726
  • [2] Abu-Nasser B.S, 2017, INT J ENG INFORM SYS, V1, P218
  • [3] Oriented stochastic loss descent algorithm to train very deep multi-layer neural networks without vanishing gradients
    Abuqaddom, Inas
    Mahafzah, Basel A.
    Faris, Hossam
    [J]. KNOWLEDGE-BASED SYSTEMS, 2021, 230
  • [4] Aicher C., 2020, PR MACH LEARN RES, V115, P799
  • [5] Alonso JM, 2019, ATL STUD UNCER MODEL, V1, P134
  • [6] [Anonymous], 1989, Complex Systems
  • [7] In AI we trust? Perceptions about automated decision-making by artificial intelligence
    Araujo, Theo
    Helberger, Natali
    Kruikemeier, Sanne
    de Vreese, Claes H.
    [J]. AI & SOCIETY, 2020, 35 (03) : 611 - 623
  • [8] Expert system for medicine diagnosis using software agents
    Arsene, Octavian
    Dumitrache, Ioan
    Mihu, Ioana
    [J]. EXPERT SYSTEMS WITH APPLICATIONS, 2015, 42 (04) : 1825 - 1834
  • [9] Pruning algorithms of neural networks - a comparative study
    Augasta, M. Gethsiyal
    Kathirvalavakumar, T.
    [J]. OPEN COMPUTER SCIENCE, 2013, 3 (03) : 105 - 115
  • [10] Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
    Barredo Arrieta, Alejandro
    Diaz-Rodriguez, Natalia
    Del Ser, Javier
    Bennetot, Adrien
    Tabik, Siham
    Barbado, Alberto
    Garcia, Salvador
    Gil-Lopez, Sergio
    Molina, Daniel
    Benjamins, Richard
    Chatila, Raja
    Herrera, Francisco
    [J]. INFORMATION FUSION, 2020, 58 : 82 - 115