Structured Compression of Convolutional Neural Networks for Specialized Tasks

Cited by: 0
Authors
Gabbay, Freddy [1 ]
Salomon, Benjamin [1 ]
Shomron, Gil [2 ]
Affiliations
[1] Ruppin Acad Ctr, Engn Fac, IL-4025000 Emek Hefer, Israel
[2] NVIDIA, Stuttgart, Germany
Keywords
machine learning; deep neural networks; convolutional neural networks; structured compression; benchmark
DOI
10.3390/math10193679
CLC Number
O1 [Mathematics]
Subject Classification Codes
0701; 070101
Abstract
Convolutional neural networks (CNNs) offer significant advantages when used in various image classification tasks and computer vision applications. CNNs are increasingly deployed in environments from edge and Internet of Things (IoT) devices to high-end computational infrastructures, such as supercomputers, cloud computing, and data centers. However, the growing amount of data and the growth in CNN model size and computational complexity introduce major computational challenges. Such challenges present entry barriers for IoT and edge devices and increase the operational expenses of large-scale computing systems. Thus, it has become essential to optimize CNN algorithms. In this paper, we introduce the S-VELCRO compression algorithm, which exploits value locality to trim filters in CNN models used for specialized tasks. S-VELCRO uses structured compression, which can save costs and reduce overhead compared with unstructured compression. The algorithm runs in two steps: a preprocessing step identifies the filters with a high degree of value locality, and a compression step trims the selected filters. As a result, S-VELCRO reduces the computational load of the channel activation function and avoids the convolution computation of the corresponding trimmed filters. Compared with typical CNN compression algorithms that run heavy back-propagation training computations, S-VELCRO has significantly lower computational requirements. Our experimental analysis shows that S-VELCRO achieves a compression-saving ratio between 6% and 30%, with no degradation in accuracy for ResNet-18, MobileNet-V2, and GoogLeNet when used for specialized tasks.
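The two-step flow described in the abstract can be sketched in a minimal, hypothetical form: score each filter's output channel by its degree of value locality, select filters whose score exceeds a threshold, and structurally remove those filters from the weight tensor. This is an illustrative NumPy sketch of the general idea, not the authors' implementation; the function names, the locality metric, and the threshold are assumptions for demonstration.

```python
import numpy as np

def value_locality(activations, tol=1e-3):
    """Fraction of a channel's activation values that coincide (within
    a tolerance) with the channel's most frequent rounded value."""
    rounded = np.round(np.asarray(activations) / tol) * tol
    _, counts = np.unique(rounded, return_counts=True)
    return counts.max() / rounded.size

def select_filters(channel_activations, threshold=0.9, tol=1e-3):
    """Preprocessing step (sketch): pick indices of filters whose
    output channels show a high degree of value locality."""
    return [i for i, act in enumerate(channel_activations)
            if value_locality(act, tol) >= threshold]

def trim_filters(weights, trimmed):
    """Compression step (sketch): structurally drop whole filters
    (output channels) from a conv weight tensor [out, in, kH, kW],
    so their convolutions are never computed."""
    keep = [i for i in range(weights.shape[0]) if i not in set(trimmed)]
    return weights[keep]
```

Because whole filters are removed, the pruning is structured: the remaining weight tensor stays dense and needs no sparse-format bookkeeping, which is the cost/overhead advantage over unstructured compression that the abstract refers to.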
Pages: 19