Towards a mathematical framework to inform neural network modelling via polynomial regression

Cited by: 24
Authors
Morala, Pablo [1]
Cifuentes, Jenny Alexandra [1]
Lillo, Rosa E. [1,2]
Ucar, Inaki [1]
Affiliations
[1] Univ Carlos III Madrid, Santander Big Data Inst, Uc3m, Getafe, Madrid, Spain
[2] Univ Carlos III Madrid, Dept Stat, Getafe, Madrid, Spain
Keywords
Polynomial regression; Neural networks; Machine learning; BLACK-BOX; APPROXIMATION;
DOI
10.1016/j.neunet.2021.04.036
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Although neural networks are widely used in a large number of applications, they are still regarded as black boxes, and dimensioning them or evaluating their prediction error remains difficult. This has led to increasing interest in the overlap between neural networks and more traditional statistical methods, which can help overcome these problems. In this article, a mathematical framework relating neural networks and polynomial regression is explored by building an explicit expression for the coefficients of a polynomial regression from the weights of a given neural network, using a Taylor expansion approach. This is achieved for single-hidden-layer neural networks in regression problems. The validity of the proposed method depends on factors such as the distribution of the synaptic potentials and the chosen activation function. The performance of the method is tested empirically by simulating synthetic data generated from polynomials to train neural networks with different structures and hyperparameters, showing that almost identical predictions can be obtained when certain conditions are met. Lastly, when learning from polynomial-generated data, the proposed method produces polynomials that correctly approximate the data locally. (C) 2021 Elsevier Ltd. All rights reserved.
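The core idea described in the abstract, Taylor-expanding each hidden unit's activation around its bias and collecting powers of the input into polynomial coefficients, can be sketched for a one-dimensional, single-hidden-layer network. This is an illustrative reconstruction under stated assumptions (random, untrained weights; tanh activation; an arbitrary expansion order of 7), not the authors' implementation, which additionally handles multivariate inputs via multinomial expansion.

```python
import numpy as np
from numpy.polynomial import polynomial as P

rng = np.random.default_rng(0)

# Toy single-hidden-layer network for one-dimensional regression.
# Weights are random and untrained, purely illustrative:
#   y(x) = v0 + sum_j v[j] * tanh(w[j] * x + b[j])
w = rng.normal(scale=0.5, size=4)   # input-to-hidden weights
b = rng.normal(scale=0.5, size=4)   # hidden biases
v = rng.normal(scale=0.5, size=4)   # hidden-to-output weights
v0 = 0.1                            # output bias

def nn(x):
    return v0 + np.tanh(np.outer(x, w) + b) @ v

def tanh_taylor(b_j, order):
    """Taylor coefficients of tanh(z) around z = b_j.

    Uses the identity d/dz tanh = 1 - tanh^2, so every derivative of
    tanh is a polynomial in t = tanh(z): P_0(t) = t and
    P_{k+1}(t) = P_k'(t) * (1 - t^2).
    """
    t = np.tanh(b_j)
    poly = np.array([0.0, 1.0])     # P_0(t) = t
    coefs, fact = [], 1.0
    for k in range(order + 1):
        coefs.append(P.polyval(t, poly) / fact)              # g^(k)(b_j) / k!
        poly = P.polymul(P.polyder(poly), [1.0, 0.0, -1.0])  # times (1 - t^2)
        fact *= k + 1
    return np.array(coefs)

# Collect the polynomial: tanh(w_j*x + b_j) = sum_k a_k (w_j*x)^k, so each
# neuron contributes v_j * a_k * w_j^k to the coefficient of x^k.
order = 7
poly_coefs = np.zeros(order + 1)
poly_coefs[0] = v0
for j in range(len(w)):
    a = tanh_taylor(b[j], order)
    poly_coefs += v[j] * a * w[j] ** np.arange(order + 1)

# The network and the extracted polynomial almost coincide near x = 0,
# consistent with the abstract's claim of a correct local approximation.
x = np.linspace(-0.5, 0.5, 9)
print(np.max(np.abs(nn(x) - P.polyval(x, poly_coefs))))
```

With small synaptic potentials the agreement is close; as the abstract notes, validity degrades when the potentials leave the region where the truncated Taylor series of the activation is accurate.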
Pages: 57-72
Page count: 16