An Efficient Approach to Escalate the Speed of Training Convolution Neural Networks

Cited: 0
Authors
P Pabitha
Anusha Jayasimhan
Affiliations
Department of Computer Technology, Madras Institute of Technology Campus, Anna University
Keywords
DOI
Not available
Chinese Library Classification
TP183 [Artificial Neural Networks and Computing]; TP391.41
Subject Classification Codes
080203; 081104; 0812; 0835; 1405
Abstract
Deep neural networks excel at image identification and computer vision applications such as visual product search, facial recognition, medical image analysis, object detection, semantic segmentation, instance segmentation, and many others. Convolutional neural networks (CNNs) are widely employed in image and video recognition applications; they deliver strong performance but at a high computational cost. With the advent of big data, the growing scale of datasets has made preprocessing and model training increasingly time-consuming. Moreover, these large-scale datasets contain redundant data points that have minimal impact on the final model outcome. To address these issues, an accelerated CNN system is proposed that speeds up training by eliminating noncritical data points during training, combined with a model compression method. The critical input data are identified by aggregating data points at two levels of granularity and evaluating their impact on the model output. Extensive experiments with the proposed method on the CIFAR-10 dataset using ResNet models show a 40% reduction in the number of FLOPs with an accuracy degradation of only 0.11%.
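The sketch below illustrates the general idea of two-level critical-data selection on CIFAR-10 with a ResNet. It is only a minimal sketch under stated assumptions: the abstract does not specify the paper's exact impact criterion, so per-sample cross-entropy loss is used as the impact proxy, the selection is aggregated per class (coarse level) and per sample (fine level), and the helper names (per_sample_losses, select_critical) and the keep_frac parameter are illustrative rather than the authors' actual method.

```python
# Hypothetical sketch: select "critical" training samples at two granularity
# levels (per-class quota, per-sample loss ranking) and train only on them.
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, Subset
import torchvision
import torchvision.transforms as T

def per_sample_losses(model, loader, device="cpu"):
    """Forward pass only: record each sample's loss as an impact proxy."""
    model.eval()
    losses = []
    with torch.no_grad():
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            losses.append(F.cross_entropy(model(x), y, reduction="none").cpu())
    return torch.cat(losses)

def select_critical(losses, targets, keep_frac=0.6):
    """Coarse level: a quota per class. Fine level: rank samples by loss."""
    keep = []
    for c in targets.unique():
        idx = (targets == c).nonzero(as_tuple=True)[0]
        k = max(1, int(keep_frac * len(idx)))
        # Within each class, keep the highest-loss (most informative) samples.
        keep.append(idx[losses[idx].argsort(descending=True)[:k]])
    return torch.cat(keep)

if __name__ == "__main__":
    device = "cuda" if torch.cuda.is_available() else "cpu"
    train_set = torchvision.datasets.CIFAR10(
        "./data", train=True, download=True, transform=T.ToTensor())
    full_loader = DataLoader(train_set, batch_size=256, shuffle=False)
    model = torchvision.models.resnet18(num_classes=10).to(device)

    losses = per_sample_losses(model, full_loader, device)
    targets = torch.tensor(train_set.targets)
    critical_idx = select_critical(losses, targets, keep_frac=0.6)

    # Subsequent epochs train only on the selected critical subset.
    critical_loader = DataLoader(Subset(train_set, critical_idx.tolist()),
                                 batch_size=128, shuffle=True)
    opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
    model.train()
    for x, y in critical_loader:
        x, y = x.to(device), y.to(device)
        opt.zero_grad()
        F.cross_entropy(model(x), y).backward()
        opt.step()
```

In practice the critical subset would presumably be refreshed periodically during training and combined with the model compression step mentioned in the abstract to obtain the reported FLOPs reduction; those details are not given here.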
Pages: 258-269
Page count: 12