High efficient activation function design for CNN model image classification task

Cited: 0
Authors
Du S. [1 ]
Jia X. [1 ]
Huang Y. [1 ]
Guo Y. [1 ]
Zhao B. [1 ]
Affiliations
[1] School of Electrical and Information Engineering, Anhui University of Science and Technology, Huainan
Source
Hongwai yu Jiguang Gongcheng/Infrared and Laser Engineering | 2022, Vol. 51, No. 03
Keywords
Convolutional neural network; High-efficiency activation function; Image classification; Neuron "necrosis"
DOI
10.3788/IRLA20210253
Abstract
Activation functions (AFs) play a key role in the ability of convolutional neural networks to learn and fit complex function models. To enable neural networks to complete various learning tasks better and faster, a new efficient activation function, EReLU, was designed in this paper. By introducing the natural logarithm function, EReLU effectively alleviates the problems of neuron "necrosis" and gradient dispersion. The mathematical model of EReLU was explored and designed by analyzing the behavior of the activation function and its derivative in the feed-forward and back-propagation passes; the specific form of the EReLU function was then determined through experiments, yielding improved accuracy and accelerated training. EReLU was subsequently tested on different networks and datasets. The results show that, compared with ReLU and its improved variants, EReLU raises accuracy by 0.12%-6.61% and training efficiency by 1.02%-6.52%, demonstrating the advantage of the EReLU function in accelerating training and improving accuracy. Copyright ©2022 Infrared and Laser Engineering. All rights reserved.
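The neuron "necrosis" (dying-ReLU) problem the abstract refers to can be illustrated numerically. The abstract does not give EReLU's exact formula, so the sketch below uses softplus, ln(1 + e^x), as a stand-in for a logarithm-based activation: unlike ReLU, whose gradient is exactly zero for negative inputs, a log-based activation keeps a strictly positive gradient everywhere, so neurons pushed into the negative regime can still learn.

```python
import numpy as np

# ReLU: gradient is exactly 0 for negative inputs, so a neuron stuck
# in the negative regime receives no weight updates ("necrosis").
def relu(x):
    return np.maximum(x, 0.0)

def relu_grad(x):
    return (x > 0).astype(float)

# Stand-in for a log-based activation (NOT the paper's EReLU, whose
# exact form is not stated in the abstract): softplus = ln(1 + e^x).
def softplus(x):
    return np.log1p(np.exp(x))

def softplus_grad(x):
    # d/dx ln(1 + e^x) = sigmoid(x), strictly positive for all x
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([-3.0, -1.0, 0.5, 2.0])
print(relu_grad(x))      # zero gradient on the negative inputs
print(softplus_grad(x))  # strictly positive gradient everywhere
```

The strictly positive gradient also stays bounded by 1, which is one plausible way a log-based design can mitigate gradient dispersion alongside neuron death.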