ANALYSIS AND SYNTHESIS OF FEEDFORWARD NEURAL NETWORKS USING DISCRETE AFFINE WAVELET TRANSFORMATIONS

Cited by: 304
Authors
PATI, YC
KRISHNAPRASAD, PS
Institutions
[1] USN,RES LABS,NANOELECTR PROC FACIL,WASHINGTON,DC 20016
[2] UNIV MARYLAND,SYST RES CTR,COLL PK,MD 20742
Source
IEEE TRANSACTIONS ON NEURAL NETWORKS | 1993 / Volume 4 / Issue 01
Funding
National Science Foundation (USA);
Keywords
DOI
10.1109/72.182697
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Numbers
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this paper we develop a representation of a class of feedforward neural networks in terms of discrete affine wavelet transforms. It is shown that, by appropriate grouping of terms, feedforward neural networks with sigmoidal activation functions can be viewed as architectures which implement affine wavelet decompositions of mappings. This result follows directly from the observation that standard feedforward network architectures possess an inherent translation-dilation structure and that every node implements the same activation function. The wavelet transform formalism thus provides a mathematical framework within which both analysis and synthesis of feedforward networks can be performed. For analysis, the wavelet formulation characterizes a class (L^2) of mappings which can be implemented by feedforward networks and yields an exact implementation of a given mapping in this class. For synthesis, the spatio-spectral localization properties of wavelets can be exploited to construct a feedforward network that performs a given approximation task. Synthesis procedures based on spatio-spectral localization reduce the training problem to one of convex optimization. We outline two such synthesis schemes.
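To make the grouping idea concrete, the following is a minimal sketch in LaTeX notation; the second-difference-of-sigmoids construction and the symbols d, a_0, b_0, and c_{m,n} are illustrative assumptions, not necessarily the exact construction used in the paper. A zero-mean function can be built from three shifted copies of a sigmoidal activation \sigma, and a single-hidden-layer network whose weights share a translation-dilation structure can then be read as a discrete affine wavelet expansion:

\psi(x) = \sigma(x + d) - 2\,\sigma(x) + \sigma(x - d), \qquad \int_{\mathbb{R}} \psi(x)\, dx = 0,

f(x) \approx \sum_{m,n} c_{m,n}\, \psi\!\left(a_0^{m} x - n b_0\right).

Each \psi term expands into three sigmoidal hidden nodes sharing the dilation a_0^{m}, which illustrates the kind of grouping of terms referred to in the abstract.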
Pages: 73 - 85
Page count: 13