Neural network explanation using inversion

Cited by: 60
Authors
Saad, Emad W.
Wunsch, Donald C., II
Affiliations
[1] The Boeing Company, Phantom Works, Seattle, WA 98124, USA
[2] University of Missouri, Department of Electrical and Computer Engineering, Rolla, MO 65409, USA
Keywords
rule extraction; neural network explanation; explanation capability of neural networks; inversion; hyperplanes; evolutionary algorithm; pedagogical;
DOI
10.1016/j.neunet.2006.07.005
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
An important drawback of many artificial neural networks (ANN) is their lack of explanation capability [Andrews, R., Diederich, J., & Tickle, A. B. (1995). Survey and critique of techniques for extracting rules from trained artificial neural networks. Knowledge-Based Systems, 8, 373-389]. This paper starts with a survey of algorithms that attempt to explain the ANN output. We then present HYPINV, a new explanation algorithm that relies on network inversion, i.e., calculating the ANN input which produces a desired output. HYPINV is a pedagogical algorithm that extracts rules in the form of hyperplanes. It can generate rules with any desired fidelity, maintaining a fidelity-complexity tradeoff. To our knowledge, HYPINV is the only pedagogical rule extraction method that extracts hyperplane rules from neural networks with continuous or binary attributes. Different network inversion techniques, involving gradient descent as well as an evolutionary algorithm, are presented, along with an information-theoretic treatment of rule extraction. HYPINV is applied to synthetic example problems and to a real aerospace problem, and it is compared with similar algorithms on benchmark problems. (c) 2006 Elsevier Ltd. All rights reserved.
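As context for the abstract, the sketch below illustrates the core operation HYPINV builds on: gradient-descent network inversion, i.e., holding a trained network's weights fixed and adjusting the input until the output reaches a desired target (here the decision-boundary value 0.5, the kind of boundary point from which hyperplane rules are fitted). The tiny two-layer network, its random weights, and all parameter choices are illustrative assumptions, not the authors' implementation.

# Minimal sketch of network inversion by gradient descent (illustrative only).
# The "trained" network's weights stay fixed; only the input x is optimized.
import numpy as np

rng = np.random.default_rng(0)

# Toy fixed network: 2 inputs -> 8 hidden units (tanh) -> 1 output (sigmoid).
W1, b1 = rng.normal(size=(8, 2)), rng.normal(size=8)
W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)

def forward(x):
    h = np.tanh(W1 @ x + b1)
    y = 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))
    return h, y

def invert(y_target, steps=2000, lr=0.1):
    """Find an input x whose network output approaches y_target."""
    x = rng.normal(size=2)                        # random starting point in input space
    for _ in range(steps):
        h, y = forward(x)
        # Backpropagate the squared output error down to the input itself.
        dy = (y - y_target) * y * (1.0 - y)       # sigmoid derivative
        dh = (W2.T @ dy) * (1.0 - h ** 2)         # tanh derivative
        dx = W1.T @ dh                            # gradient w.r.t. the input
        x -= lr * dx
    return x

# Invert toward the decision boundary (output = 0.5).
x_boundary = invert(np.array([0.5]))
print(x_boundary, forward(x_boundary)[1])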
Pages: 78-93
Number of pages: 16
References
76 in total
[1] Andrews, R., Diederich, J., & Tickle, A. B. (1995). Survey and critique of techniques for extracting rules from trained artificial neural networks. Knowledge-Based Systems, 8(6), 373-389.
[2] Andrews, R. (1995). Frontiers in Artificial Intelligence and Applications, 27, 1.
[3] Anonymous. P 12 INT C EXP SYS T.
[4] Anonymous (2000). HYBRID NEURAL SYSTEM.
[5] Anonymous (2001). NEURAL NETWORKS COMP.
[6] Anonymous (1994). DEDEC DECISION DETEC.
[7] Anonymous (1996). EXPLANATION BASED NE.
[8] Baba, K. (1992). Proceedings of the International Joint Conference on Neural Networks, 579.
[9] Benitez, J. M., Castro, J. L., & Requena, I. (1997). Are artificial neural networks black boxes? IEEE Transactions on Neural Networks, 8(5), 1156-1164.
[10] Berenji, H. R. (1991). Machine Learning, 475.