Reliability evaluation of FPGA based pruned neural networks

Cited by: 7
Authors
Gao, Zhen [1 ]
Yao, Yi [1 ]
Wei, Xiaohui [1 ]
Yan, Tong [1 ]
Zeng, Shulin [2 ]
Ge, Guangjun [2 ]
Wang, Yu [2 ]
Ullah, Anees [3 ]
Reviriego, Pedro [4 ]
Affiliations
[1] Tianjin Univ, Tianjin 300072, Peoples R China
[2] Tsinghua Univ, Sch Elect Engn, Beijing 100084, Peoples R China
[3] Univ Engn & Technol, Peshawar 220101, Abbottabad, Pakistan
[4] Univ Carlos III Madrid, Leganes 28911, Spain
Funding
National Natural Science Foundation of China
Keywords
Convolutional Neural Networks (CNNs); Pruning; Reliability; FPGAs; Fault injection; Radiation
DOI
10.1016/j.microrel.2022.114498
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Discipline Code
0808; 0809
Abstract
Convolutional Neural Networks (CNNs) are widely used for image classification. To fit CNNs onto resource-limited systems such as FPGAs, pruning is a popular technique for reducing their complexity. In this paper, the robustness of pruned CNNs against errors in the weights and in the configuration memory of the FPGA accelerator is evaluated, with VGG16 as a case study and two popular pruning methods (magnitude-based and filter pruning) considered. In particular, the accuracy loss of the original VGG16 and of the versions pruned at different rates is measured through fault injection experiments, and the results show that the effect of errors on weights and on configuration memory differs between the two pruning methods. For errors on weights, networks pruned with either method become more reliable as the pruning rate increases, but those obtained with filter pruning are relatively less reliable. For errors on configuration memory, errors on about 30% of the configuration bits affect the CNN operation, and only 14% of those bits introduce a significant accuracy loss. However, the effect of these critical bits differs between the two pruning methods: networks pruned with the magnitude-based method are less reliable than the original VGG16, whereas those pruned with filter pruning are more reliable than the original VGG16. These different effects are explained based on the structure of the CNN accelerator and the properties of the two pruning methods. The impact of quantization on CNN reliability is also evaluated for the magnitude-based pruning method.
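For readers unfamiliar with the two pruning strategies and with weight fault injection, the sketch below illustrates them on a toy convolutional weight tensor. It is a minimal illustration only, assuming 8-bit quantized weights stored in two's complement; the function names (`magnitude_prune`, `filter_prune`, `flip_weight_bit`) and all parameter choices are hypothetical and are not taken from the paper or its accelerator implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy convolutional layer weights: [output filters, input channels, kH, kW]
W = rng.normal(size=(64, 3, 3, 3)).astype(np.float32)

def magnitude_prune(w, rate):
    """Unstructured pruning: zero the individual weights with the smallest magnitude."""
    k = int(rate * w.size)
    if k == 0:
        return w.copy()
    thresh = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    return np.where(np.abs(w) <= thresh, 0.0, w).astype(w.dtype)

def filter_prune(w, rate):
    """Structured pruning: zero whole filters (output channels) with the smallest L1 norm."""
    norms = np.abs(w).reshape(w.shape[0], -1).sum(axis=1)
    k = int(rate * w.shape[0])
    pruned = w.copy()
    if k > 0:
        pruned[np.argsort(norms)[:k]] = 0.0
    return pruned

def flip_weight_bit(w, index, bit):
    """Emulate a single-bit upset in an 8-bit quantized copy of the weights."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    q.view(np.uint8).ravel()[index] ^= np.uint8(1 << bit)  # inject the fault
    return q.astype(np.float32) * scale

W_mag = magnitude_prune(W, rate=0.5)               # ~50% of weights set to zero
W_flt = filter_prune(W, rate=0.5)                  # 32 of 64 filters set to zero
W_err = flip_weight_bit(W_mag, index=123, bit=7)   # flip the MSB of one weight
print((W_mag == 0).mean(), (W_flt == 0).mean(), np.abs(W_err - W_mag).max())
```

Flipping the most significant bit of a quantized weight produces the largest perturbation, which is one reason why bit position and the quantization scheme matter in this kind of reliability study.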
Pages: 11