Selective Dropout for Deep Neural Networks

Cited by: 5
Authors
Barrow, Erik [1 ]
Eastwood, Mark [1 ]
Jayne, Chrisina [2 ]
Affiliations
[1] Coventry Univ, Coventry, W Midlands, England
[2] Robert Gordon Univ, Aberdeen, Scotland
Source
NEURAL INFORMATION PROCESSING, ICONIP 2016, PT III, 2016, Vol. 9949
Keywords
MNIST; Artificial neural network; Deep learning; Dropout network; Non-random dropout; Selective dropout
DOI
10.1007/978-3-319-46675-0_57
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Dropout has been proven to be an effective method for reducing overfitting in deep artificial neural networks. We present three new alternative methods for performing dropout on a deep neural network, each of which improves the effectiveness of dropout over the same training period. These methods select neurons to be dropped using statistics computed from a neuron's change in weight, the average magnitude of a neuron's weights, and the output variance of a neuron. We found that increasing the probability of dropping neurons with smaller values of these statistics, and decreasing the probability for those with larger values, gave improved results over 10,000 epochs of training. The most effective of these was the Output Variance method, giving an average improvement of 1.17% accuracy over traditional dropout.
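The output-variance selection rule described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the linear rank-to-probability mapping, the function name `selective_dropout_mask`, and the `mean_drop` parameter are assumptions; the paper's exact probability assignment is not reproduced here.

```python
import numpy as np

def selective_dropout_mask(activations, mean_drop=0.5, rng=None):
    """Sketch of variance-based selective dropout (assumed formulation).

    activations: array of shape (batch, units) holding one hidden layer's
    outputs over a mini-batch. Neurons with lower output variance are
    assigned a higher drop probability, while the layer-wide average drop
    rate stays at `mean_drop`.
    """
    rng = np.random.default_rng() if rng is None else rng
    var = activations.var(axis=0)           # per-neuron output variance
    rank = np.argsort(np.argsort(var))      # rank 0 = lowest variance
    n = var.size
    # Linearly map rank to a drop probability: the lowest-variance neuron
    # is dropped most often; probabilities average to `mean_drop`.
    p_drop = mean_drop * 2.0 * (n - rank) / (n + 1)
    p_drop = np.clip(p_drop, 0.0, 1.0)
    keep = rng.random(n) >= p_drop          # sample the dropout mask
    return keep, p_drop
```

During training, the returned `keep` mask would be applied to the layer's activations each iteration; at test time all neurons are kept, as in standard dropout.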
Pages: 519-528 (10 pages)
Related papers (50 total)
  • [1] Deep Dropout Artificial Neural Networks for Recognising Digits and Characters in Natural Images
    Barrow, Erik
    Jayne, Chrisina
    Eastwood, Mark
    NEURAL INFORMATION PROCESSING, ICONIP 2015, PT IV, 2015, 9492 : 29 - 37
  • [2] Dropout Rademacher complexity of deep neural networks
    Gao, Wei
    Zhou, Zhi-Hua
    SCIENCE CHINA-INFORMATION SCIENCES, 2016, 59 (07) : 173 - 184
  • [3] Regularization of deep neural networks with spectral dropout
    Khan, Salman H.
    Hayat, Munawar
    Porikli, Fatih
    NEURAL NETWORKS, 2019, 110 : 82 - 90
  • [4] Deep Learning Convolutional Neural Networks with Dropout - a Parallel Approach
    Shen, Jingyi
    Shafiq, M. Omair
    2018 17TH IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS (ICMLA), 2018, : 572 - 577
  • [5] IMPROVING DEEP NEURAL NETWORKS BY USING SPARSE DROPOUT STRATEGY
    Zheng, Hao
    Chen, Mingming
    Liu, Wenju
    Yang, Zhanlei
    Liang, Shan
    2014 IEEE CHINA SUMMIT & INTERNATIONAL CONFERENCE ON SIGNAL AND INFORMATION PROCESSING (CHINASIP), 2014, : 21 - 26
  • [6] IMPROVING DEEP NEURAL NETWORKS FOR LVCSR USING RECTIFIED LINEAR UNITS AND DROPOUT
    Dahl, George E.
    Sainath, Tara N.
    Hinton, Geoffrey E.
    2013 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2013, : 8609 - 8613
  • [7] Normalization and dropout for stochastic computing-based deep convolutional neural networks
    Li, Ji
    Yuan, Zihao
    Li, Zhe
    Ren, Ao
    Ding, Caiwen
    Draper, Jeffrey
    Nazarian, Shahin
    Qiu, Qinru
    Yuan, Bo
    Wang, Yanzhi
    INTEGRATION-THE VLSI JOURNAL, 2019, 65 : 395 - 403
  • [8] Batch Normalization and Dropout Regularization in Training Deep Neural Networks with Label Noise
    Rusiecki, Andrzej
    INTELLIGENT SYSTEMS DESIGN AND APPLICATIONS, ISDA 2021, 2022, 418 : 57 - 66