Selective Dropout for Deep Neural Networks

Cited by: 5
Authors
Barrow, Erik [1 ]
Eastwood, Mark [1 ]
Jayne, Chrisina [2 ]
Affiliations
[1] Coventry Univ, Coventry, W Midlands, England
[2] Robert Gordon Univ, Aberdeen, Scotland
Source
NEURAL INFORMATION PROCESSING, ICONIP 2016, PT III | 2016 / Vol. 9949
Keywords
MNIST; Artificial neural network; Deep learning; Dropout network; Non-random dropout; Selective dropout;
DOI
10.1007/978-3-319-46675-0_57
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Dropout has been proven to be an effective method for reducing overfitting in deep artificial neural networks. We present three new alternative methods for performing dropout on a deep neural network which improve the effectiveness of the dropout method over the same training period. These methods select neurons to be dropped using statistics calculated from a neuron's change in weight, the average size of a neuron's weights, and the output variance of a neuron. We found that increasing the probability of dropping neurons with smaller values of these statistics, and decreasing the probability for those with larger values, gave an improved result in training over 10,000 epochs. The most effective of these was the Output Variance method, giving an average improvement of 1.17% accuracy over traditional dropout methods.
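The idea described in the abstract can be sketched in code: instead of dropping every neuron with equal probability, compute a per-neuron statistic (here, output variance over a batch) and bias the drop probability toward neurons with small values of that statistic. This is a minimal illustrative sketch, not the authors' exact algorithm; the linear rank-to-probability schedule and all function names are assumptions.

```python
import numpy as np

def selective_dropout_mask(activations, drop_rate=0.5, rng=None):
    """Keep-mask where low-output-variance neurons are more likely dropped.

    activations: array of shape (batch, n_neurons), a hidden layer's outputs.
    drop_rate: target average fraction of neurons to drop.
    NOTE: the linear schedule below is an illustrative assumption, not the
    schedule from the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    var = activations.var(axis=0)        # per-neuron output variance
    ranks = var.argsort().argsort()      # rank 0 = smallest variance
    n = var.size
    # Linear schedule: 2*drop_rate for the lowest-variance neuron down to 0
    # for the highest, so the mean drop probability equals drop_rate.
    p_drop = np.clip(2.0 * drop_rate * (1.0 - ranks / (n - 1)), 0.0, 1.0)
    keep = rng.random(n) >= p_drop       # sample a per-neuron keep decision
    return keep.astype(activations.dtype)

# Example: make the first 10 neurons low-variance; they get dropped more often.
acts = np.random.default_rng(0).normal(size=(64, 100))
acts[:, :10] *= 0.01
mask = selective_dropout_mask(acts, drop_rate=0.5, rng=np.random.default_rng(1))
```

In use, a layer's activations would be multiplied elementwise by `mask` during training, exactly as in standard dropout; only the sampling of the mask changes.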
Pages: 519-528
Page count: 10