Deep Neural Networks with Cascaded Output Layers

Cited by: 0
Authors
Cui H. [1 ]
Bai J. [1 ]
Bi X. [1 ]
Huang L. [1 ]
Affiliations
[1] School of Automotive Studies, Tongji University, Shanghai
Source
Bai, Jie (baijie@tongji.edu.cn) | 2017 / Science Press / Vol. 45
Keywords
AdaBoost; Deep neural network; Generalization performance;
DOI
10.11908/j.issn.0253-374x.2017.s1.004
Abstract
This paper presents a novel design method for deep neural networks that aims to improve their generalization performance. First, common methods for improving generalization are reviewed. Then, a deep neural network with cascaded output layers is described: the network has several output layers that constitute a sequence of classifiers which, combined with the AdaBoost algorithm, boost generalization performance. At the same time, the proposed design reduces computational cost by sharing part of the network structure among these classifiers. Finally, experiments were conducted to confirm the effectiveness and reliability of the method. © 2017, Editorial Department of Journal of Tongji University. All rights reserved.
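The abstract describes an architecture in which several output layers are attached along a shared network trunk, forming a sequence of classifiers whose predictions are combined with AdaBoost-style weights while reusing the shared features. The sketch below (a minimal, illustrative PyTorch example, not the authors' implementation; the class name, layer sizes, and the alphas buffer are assumptions) shows one way such cascaded output layers and their weighted combination can be wired up:

import torch
import torch.nn as nn

class CascadedOutputNet(nn.Module):
    # A shared trunk of blocks; each block feeds both the next block and its
    # own output layer, so later classifiers reuse earlier computation.
    def __init__(self, in_dim=784, hidden=256, n_classes=10, n_heads=3):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Linear(in_dim if i == 0 else hidden, hidden), nn.ReLU())
            for i in range(n_heads)
        ])
        # One output layer (classifier head) per trunk block: the "cascaded
        # output layers" of the title.
        self.heads = nn.ModuleList([nn.Linear(hidden, n_classes) for _ in range(n_heads)])
        # AdaBoost-style combination weights (one alpha per classifier),
        # initialised to 1 here; in boosting they would be set from each
        # head's weighted training error.
        self.register_buffer("alphas", torch.ones(n_heads))

    def forward(self, x):
        # Return the logits of every cascaded output layer.
        logits, h = [], x
        for block, head in zip(self.blocks, self.heads):
            h = block(h)            # shared feature computation
            logits.append(head(h))  # classifier attached at this depth
        return logits

    def combined_prediction(self, x):
        # Weighted vote of the cascaded classifiers, as in AdaBoost.
        weighted = [a * torch.softmax(l, dim=1)
                    for a, l in zip(self.alphas, self.forward(x))]
        return torch.stack(weighted).sum(dim=0).argmax(dim=1)

# Minimal usage: in a boosting loop, each head would be trained in turn on
# AdaBoost-reweighted samples and its alpha set from its weighted error.
model = CascadedOutputNet()
x = torch.randn(4, 784)
print(model.combined_prediction(x).shape)  # torch.Size([4])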
Pages: 19-23
Page count: 4
Related papers
9 items in total
  • [1] Srivastava N., Hinton G.E., Krizhevsky A., Et al., Dropout: a simple way to prevent neural networks from overfitting, Journal of Machine Learning Research, 15, 1, (2014)
  • [2] Freund Y., Schapire R.E., Experiments with a new boosting algorithm, Proceedings of the 1996 International Conference on Machine Learning, pp. 148-156, (1996)
  • [3] Viola P., Jones M., Rapid object detection using a boosted cascade of simple features, Proceedings of the 2001 IEEE Conference on Computer Vision and Pattern Recognition, pp. 511-518, (2001)
  • [4] Erhan D., Bengio Y., Courville A., Et al., Why does unsupervised pre-training help deep learning?, Journal of Machine Learning Research, 11, 1, (2010)
  • [5] Krizhevsky A., Sutskever I., Hinton G., ImageNet classification with deep convolutional neural networks, Communications of the ACM, 60, 6, (2017)
  • [6] Szegedy C., Liu W., Jia Y.Q., Et al., Going deeper with convolutions, Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1-9, (2015)
  • [7] Kingma D.P., Welling M., Auto-encoding variational Bayes, Proceedings of the 2013 International Conference on Learning Representations, pp. 1312-1322, (2013)
  • [8] Deng J., Berg A., Li K., Et al., What does classifying more than 10,000 image categories tell us?, Proceedings of the 2010 European Conference on Computer Vision, pp. 71-84, (2010)
  • [9] Sanchez J., Perronnin F., High-dimensional signature compression for large-scale image classification, Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1665-1672, (2011)