Approximate Fisher Information Matrix to Characterize the Training of Deep Neural Networks

Cited by: 12
Authors
Liao, Zhibin [1 ]
Drummond, Tom [2 ]
Reid, Ian [1 ]
Carneiro, Gustavo [1 ]
Affiliations
[1] Univ Adelaide, Australian Ctr Robot Vis, Adelaide, SA 5005, Australia
[2] Monash Univ, Australian Ctr Robot Vis, Clayton, Vic 3800, Australia
Funding
Australian Research Council
Keywords
Training; Machine learning; Neural networks; Computational modeling; Convergence; Linear programming; Testing; deep learning; neural networks; stochastic gradient descent; Fisher information matrix; neural network training characterisation; OPTIMIZATION METHODS
DOI
10.1109/TPAMI.2018.2876413
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
In this paper, we introduce a novel methodology for characterizing the performance of deep learning networks (ResNets and DenseNets) with respect to training convergence and generalization, as a function of mini-batch size and learning rate, for image classification. This methodology is based on novel measurements derived from the eigenvalues of the approximate Fisher information matrix, which can be computed efficiently even for high-capacity deep models. Our proposed measurements can help practitioners monitor and control the training process (by actively tuning the mini-batch size and learning rate) to achieve good training convergence and generalization. Furthermore, the proposed measurements also allow us to show that the training process can be optimized with a new dynamic sampling training approach that continuously and automatically changes the mini-batch size and learning rate during training. Finally, we show that the proposed dynamic sampling training approach achieves faster training times and competitive classification accuracy compared to the current state of the art.
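To make the abstract's central quantity concrete, below is a minimal sketch, in Python/NumPy, of one common approximation of the Fisher information matrix: the empirical Fisher built from per-example gradients of a mini-batch, F ~= (1/B) * sum_i g_i g_i^T, whose eigenvalues can then be examined. The tiny logistic-regression model, the data, and this particular estimator are illustrative assumptions for exposition only, not the authors' actual measurements or implementation.

    import numpy as np

    # Illustrative sketch (assumption, not the paper's estimator):
    # empirical Fisher for a logistic-regression model, approximated
    # from a mini-batch as F ~= (1/B) * sum_i g_i g_i^T, where g_i is
    # the per-example gradient of the negative log-likelihood.

    rng = np.random.default_rng(0)
    B, d = 64, 10                      # mini-batch size, parameter dimension
    X = rng.normal(size=(B, d))        # mini-batch of inputs
    y = rng.integers(0, 2, size=B)     # binary labels
    w = rng.normal(size=d) * 0.1       # current model parameters

    p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted probabilities
    G = (p - y)[:, None] * X           # per-example NLL gradients, shape (B, d)

    F = G.T @ G / B                    # empirical Fisher approximation, (d, d)
    eigvals = np.linalg.eigvalsh(F)    # eigenvalues (ascending); F is symmetric PSD

    print("largest eigenvalue:", eigvals[-1])
    print("trace:", eigvals.sum())

Tracking summary statistics of these eigenvalues (e.g., the largest eigenvalue or the trace) across training steps is the kind of measurement the abstract describes for guiding mini-batch size and learning-rate choices; for high-capacity networks the paper computes such measurements efficiently rather than forming the full matrix.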
Pages: 15 - 26
Page count: 12