The generalized sigmoid activation function: Competitive supervised learning

Cited by: 130
Author: Narayan, S
Affiliation: Department of Mathematical Sciences, University of North Carolina-Wilmington, Wilmington
DOI: 10.1016/S0020-0255(96)00200-9
Chinese Library Classification: TP [Automation Technology, Computer Technology]
Subject Classification: 0812
Abstract
Multilayer perceptron (MLP) networks trained using backpropagation are perhaps the most commonly used neural network model. Central to the MLP model is the use of neurons with nonlinear and differentiable activation functions. The most commonly used activation function is a sigmoidal function, and frequently all neurons in an MLP network employ the same activation function. In this paper, we introduce the notion of the generalized sigmoid as an activation function for neurons in the output layer of an MLP network. The enhancements afforded by the use of the generalized sigmoid are analyzed and demonstrated in the context of some well-known classification problems. © Elsevier Science Inc. 1997.
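To make the contrast in the abstract concrete, the following is a minimal Python/NumPy sketch, not the paper's exact formulation: a conventional output layer applies the logistic sigmoid to each output neuron independently, whereas a normalized, softmax-style activation couples the output neurons so that they compete with one another. The name generalized_sigmoid and the specific normalization used here are assumptions for illustration; consult the paper (DOI above) for Narayan's precise definition.

```python
import numpy as np

def sigmoid(z):
    """Standard logistic sigmoid, applied elementwise to each output neuron."""
    return 1.0 / (1.0 + np.exp(-z))

def generalized_sigmoid(z):
    """Illustrative competitive output activation (an assumption, not
    necessarily the paper's definition): output neurons share a common
    normalization, so increasing one output suppresses the others."""
    e = np.exp(z - np.max(z))  # shift by the max for numerical stability
    return e / e.sum()

# Toy output-layer pre-activations for a three-class problem
z = np.array([2.0, 1.0, -1.0])
print(sigmoid(z))              # independent outputs, each in (0, 1)
print(generalized_sigmoid(z))  # coupled outputs, summing to 1
```

With the normalized variant, raising one pre-activation necessarily lowers the remaining outputs, which is the competitive behavior among output-layer neurons suggested by the paper's title.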
Pages: 69-82
Number of pages: 14