Elastic exponential linear units for convolutional neural networks

Cited by: 40
Authors
Kim, Daeho [1]
Kim, Jinah [2]
Kim, Jaeil [1,3]
Affiliations
[1] Kyungpook Natl Univ, Dept Artificial Intelligence, 80 Daegu Ro, Daegu, South Korea
[2] Korea Inst Ocean Sci & Technol, Marine Disaster Res Ctr, 385 Haeyang Ro, Busan, South Korea
[3] Kyungpook Natl Univ, Sch Comp Sci & Engn, 80 Daehak Ro, Daegu, South Korea
Keywords
NOISE
DOI
10.1016/j.neucom.2020.03.051
CLC number
TP18 [Theory of artificial intelligence]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Activation functions play an important role in determining the depth and non-linearity of deep learning models. Since the Rectified Linear Unit (ReLU) was introduced, many modifications, in which noise is intentionally injected, have been proposed to avoid overfitting. The Exponential Linear Unit (ELU) and its variants, with trainable parameters, have been proposed to reduce the bias shift effect often observed with ReLU-type activation functions. In this paper, we propose a novel activation function, called the Elastic Exponential Linear Unit (EELU), which combines the advantages of both types of activation functions in a generalized form. EELU has an elastic slope in the positive part and preserves the negative signal with a small non-zero gradient. We also present a new strategy for injecting neuronal noise drawn from a Gaussian distribution into the activation function to improve generalization. By visualizing the latent features of convolutional neural networks, we demonstrate that EELU, with its random noise, can represent a wider variety of features than other activation functions. We evaluated the effectiveness of EELU through extensive image classification experiments on the CIFAR-10/CIFAR-100, ImageNet, and Tiny ImageNet datasets. Our experimental results show that EELU achieved better generalization performance and higher classification accuracy than conventional activation functions such as ReLU, ELU, ReLU- and ELU-like variants, Scaled ELU, and Swish. EELU also improved image classification performance when trained on a smaller number of samples, owing to its noise injection strategy, which allows significant variation in function outputs, including deactivation. © 2020 The Author(s)
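The abstract describes the mechanism only qualitatively, so the following is a minimal PyTorch sketch of an EELU-style activation, assuming the positive slope is perturbed by Gaussian noise around 1 during training and the negative part follows an ELU-shaped curve with parameters alpha and beta. The parameter names, the noise scale sigma, and the inference-time behavior are illustrative assumptions, not the authors' published definition.

import torch
import torch.nn as nn

class EELUSketch(nn.Module):
    """Sketch of an elastic, ELU-like activation (illustrative; not the paper's exact formula)."""
    def __init__(self, alpha: float = 1.0, beta: float = 1.0, sigma: float = 0.1):
        super().__init__()
        # ELU-like parameters for the negative branch (assumed trainable here).
        self.alpha = nn.Parameter(torch.tensor(float(alpha)))
        self.beta = nn.Parameter(torch.tensor(float(beta)))
        self.sigma = sigma  # assumed std of the Gaussian noise on the positive slope

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.training:
            # "Elastic" slope: a random multiplier around 1, drawn per element.
            k = 1.0 + self.sigma * torch.randn_like(x)
        else:
            # Deterministic expected slope at inference time.
            k = torch.ones_like(x)
        pos = k * torch.relu(x)  # randomized linear part for x > 0
        # ELU-shaped branch: small non-zero gradient for x < 0, exactly 0 for x >= 0.
        neg = self.alpha * (torch.exp(self.beta * torch.clamp(x, max=0.0)) - 1.0)
        return pos + neg

In use, the module would simply replace nn.ReLU() inside a CNN, e.g. y = EELUSketch()(torch.randn(4, 16)); calling model.eval() switches off the random slope so outputs become deterministic.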
Pages: 253-266
Number of pages: 14