Studying the plasticity in deep convolutional neural networks using random pruning

Cited by: 22
Authors
Mittal, Deepak [1 ]
Bhardwaj, Shweta [1 ]
Khapra, Mitesh M. [1 ]
Ravindran, Balaraman [1 ]
Affiliation
[1] Indian Institute of Technology Madras, Robert Bosch Centre for Data Science and Artificial Intelligence (RBC-DSAI), Department of Computer Science and Engineering, Chennai, Tamil Nadu, India
Keywords
Deep learning; Filter pruning; Model compression; Convolutional neural networks
DOI
10.1007/s00138-018-01001-9
CLC Number (Chinese Library Classification)
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Recently, there has been a lot of work on pruning filters from deep convolutional neural networks (CNNs) with the intention of reducing computations. The key idea is to rank the filters based on a certain criterion (say, the ℓ1-norm, the average percentage of zeros, etc.) and retain only the top-ranked filters. Once the low-scoring filters are pruned away, the remainder of the network is fine-tuned and is shown to give performance comparable to the original unpruned network. In this work, we report experiments which suggest that the comparable performance of the pruned network is not due to the specific criterion chosen, but due to the inherent plasticity of deep neural networks, which allows them to recover from the loss of pruned filters once the remaining filters are fine-tuned. Specifically, we show counterintuitive results wherein, by randomly pruning 25-50% of the filters from deep CNNs, we are able to obtain the same performance as state-of-the-art pruning methods. We empirically validate our claims through an exhaustive evaluation with VGG-16 and ResNet-50. Further, we also evaluate a real-world scenario where a CNN trained on all 1000 ImageNet classes needs to be tested on only a small set of classes at test time (say, only animals). We create a new benchmark dataset from ImageNet to evaluate such class-specific pruning and show that even here a random pruning strategy gives close to state-of-the-art performance. Lastly, unlike existing approaches, which mainly focus on the task of image classification, in this work we also report results on object detection and image segmentation. We show that using a simple random pruning strategy we can achieve a significant speedup in object detection (74% improvement in frames per second) while retaining the same accuracy as the original Faster-RCNN model. Similarly, we show that the performance of a pruned segmentation network is very similar to that of the original unpruned SegNet.
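As a rough illustration of the ranking step described in the abstract, the following is a minimal PyTorch sketch (not the authors' implementation; the helper names and the toy layer are illustrative assumptions). It scores a convolutional layer's filters either by their ℓ1-norm or uniformly at random, then keeps only the top-ranked ones; the paper's finding is that, after fine-tuning, both choices recover comparable accuracy.

import torch
import torch.nn as nn

def filter_scores(conv: nn.Conv2d, criterion: str = "l1") -> torch.Tensor:
    """One score per output filter of `conv`."""
    if criterion == "l1":
        # Sum of absolute weights of each filter (its l1-norm).
        return conv.weight.detach().abs().sum(dim=(1, 2, 3))
    if criterion == "random":
        # Uniform random scores: every filter is equally likely to be pruned.
        return torch.rand(conv.out_channels)
    raise ValueError(f"unknown criterion: {criterion!r}")

def kept_filters(conv: nn.Conv2d, prune_fraction: float, criterion: str) -> torch.Tensor:
    """Indices of the filters retained after pruning `prune_fraction` of them."""
    n_keep = conv.out_channels - int(prune_fraction * conv.out_channels)
    return torch.topk(filter_scores(conv, criterion), n_keep).indices

# Toy example: prune half of a 64-filter layer under each criterion.
conv = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3)
print(kept_filters(conv, 0.5, "l1").shape)      # torch.Size([32])
print(kept_filters(conv, 0.5, "random").shape)  # torch.Size([32])

In a full pipeline, the retained indices would be used to slice this layer's weights (and the input channels of the following layer) before fine-tuning the slimmed network.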
Pages: 203-216 (14 pages)