Reliability evaluation of FPGA based pruned neural networks

Cited by: 7
Authors
Gao, Zhen [1 ]
Yao, Yi [1 ]
Wei, Xiaohui [1 ]
Yan, Tong [1 ]
Zeng, Shulin [2 ]
Ge, Guangjun [2 ]
Wang, Yu [2 ]
Ullah, Anees [3 ]
Reviriego, Pedro [4 ]
Affiliations
[1] Tianjin Univ, Tianjin 300072, Peoples R China
[2] Tsinghua Univ, Sch Elect Engn, Beijing 100084, Peoples R China
[3] Univ Engn & Technol, Peshawar 220101, Abbottabad, Pakistan
[4] Univ Carlos III Madrid, Leganes 28911, Spain
Funding
National Natural Science Foundation of China;
Keywords
Convolutional Neural Networks (CNNs); Pruning; Reliability; FPGAs; Fault injection; RADIATION;
DOI
10.1016/j.microrel.2022.114498
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Codes
0808; 0809;
Abstract
Convolutional Neural Networks (CNNs) are widely used for image classification. To fit CNN implementations onto resource-limited systems such as FPGAs, pruning is a popular technique for reducing complexity. In this paper, the robustness of pruned CNNs against errors on the weights and on the configuration memory of the FPGA accelerator is evaluated with VGG16 as a case study, and two popular pruning methods (magnitude-based and filter pruning) are considered. In particular, the accuracy loss of the original VGG16 and of versions pruned at different rates is measured through fault injection experiments, and the results show that the effect of errors on weights and on configuration memory is different for the two pruning methods. For errors on weights, the networks pruned with both methods demonstrate higher reliability at higher pruning rates, but those obtained with filter pruning are relatively less reliable. For errors on configuration memory, errors on about 30% of the configuration bits affect the CNN operation, and only 14% of those introduce significant accuracy loss. However, the effect of the same critical bits differs between the two pruning methods: networks pruned with the magnitude-based method are less reliable than the original VGG16, whereas those pruned with filter pruning are more reliable than the original VGG16. The different effects are explained based on the structure of the CNN accelerator and the properties of the two pruning methods. The impact of quantization on CNN reliability is also evaluated for the magnitude-based pruning method.
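The fault injection experiments on weights described in the abstract amount to flipping individual bits of the stored network parameters and observing the accuracy impact. As a minimal illustration, the sketch below flips a single bit of one quantized weight; the function name, the 8-bit integer quantization, and the NumPy representation are assumptions for illustration, not the paper's exact experimental setup.

```python
import numpy as np

def flip_bit(weights: np.ndarray, index: int, bit: int) -> np.ndarray:
    """Return a copy of int8 `weights` with bit `bit` flipped in the
    weight at flat position `index` (a single-bit fault injection)."""
    faulty = weights.copy()
    # Reinterpret the bytes as unsigned so XOR works on the raw bit pattern.
    flat = faulty.reshape(-1).view(np.uint8)
    flat[index] ^= np.uint8(1 << bit)
    return faulty

rng = np.random.default_rng(0)
w = rng.integers(-128, 128, size=(4, 4), dtype=np.int8)
w_faulty = flip_bit(w, index=5, bit=7)  # flip the sign bit of one weight
diff = int(np.count_nonzero(w != w_faulty))  # exactly one weight changes
```

In a full evaluation campaign, such injected networks would then be run over a test set to record the accuracy loss per fault, repeated over many randomly chosen bit positions.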
Pages: 11