Fault Resilience Techniques for Flash Memory of DNN Accelerators

Cited by: 8
Authors
Lu, Shyue-Kung [1 ]
Wu, Yu-Sheng [1 ]
Hong, Jin-Hua [2 ]
Miyase, Kohei [3 ]
Affiliations
[1] Natl Taiwan Univ Sci & Technol, Taipei 10607, Taiwan
[2] Natl Univ Kaohsiung, Kaohsiung, Taiwan
[3] Kyushu Inst Technol, Iizuka, Fukuoka, Japan
DOI: 10.1109/ITCAsia55616.2022.00011
CLC Number: TP3 [Computing Technology, Computer Technology]
Discipline Code: 0812
Abstract
Deep neural networks (DNNs) are widely used in smart appliances, face recognition, and autonomous driving. The trained weight data are usually stored in flash memory, which suffers from reliability and endurance issues. Owing to the inherent error tolerance of DNN applications, address remapping techniques are proposed for protecting the weight data stored in flash memory. Bit significances are first analyzed, and a weight transposer is then proposed for remapping significant weight bits to fault-free or more reliable flash cells. A bipartite graph model is developed for modeling address remapping, and the corresponding hardware architectures are also proposed. We use the deep learning framework PyTorch to evaluate the inference accuracy of different DNN models. Experimental results show that, with a 0.01% bit error rate (BER) injected into the weight data, the accuracy losses of widely used DNN models are less than 1% with negligible hardware overhead.
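The following is a minimal, self-contained sketch (NumPy only, not the paper's implementation) of the idea described in the abstract: 8-bit quantized weights are viewed as bit-planes, and a significance-aware mapping assigns the most significant planes to the most reliable flash regions. The region BER values and the greedy sort-and-pair assignment are assumptions standing in for the paper's bipartite-graph matching and weight-transposer hardware.

```python
# Illustrative sketch only: significance-aware placement of weight bit-planes
# onto flash regions of unequal raw BER. Region BERs and the greedy matching
# are assumptions, not the paper's actual parameters or algorithm.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical raw bit-error rates for eight flash regions
# (chosen only to contrast reliable and weak cells).
region_ber = np.array([1e-6, 1e-5, 1e-4, 1e-3, 1e-3, 1e-2, 1e-2, 1e-2])

def inject_bitplane_errors(q_weights, plane_to_region):
    """Flip bits of 8-bit weights; bit-plane b is stored in region plane_to_region[b]."""
    corrupted = q_weights.copy()
    for bit in range(8):
        ber = region_ber[plane_to_region[bit]]
        flips = rng.random(q_weights.shape) < ber      # bit errors at this region's BER
        corrupted ^= (flips.astype(np.uint8) << bit)   # flip the stored bit-plane
    return corrupted

# 100k unsigned 8-bit quantized weights (stand-in for a trained DNN layer).
weights = rng.integers(0, 256, size=100_000, dtype=np.uint8)

# Naive mapping: bit-plane b is stored in region b, so MSBs may land in weak cells.
naive_map = np.arange(8)

# Significance-aware remapping: pair the most significant bit-planes with the most
# reliable regions (a greedy stand-in for the paper's bipartite-graph matching).
significance_order = np.argsort([-(1 << b) for b in range(8)])  # bit 7 (MSB) first
reliability_order = np.argsort(region_ber)                      # lowest BER first
remap = np.empty(8, dtype=int)
remap[significance_order] = reliability_order

for name, mapping in (("naive", naive_map), ("remapped", remap)):
    corrupted = inject_bitplane_errors(weights, mapping)
    mean_err = np.abs(corrupted.astype(int) - weights.astype(int)).mean()
    print(f"{name:9s} mean |weight perturbation| = {mean_err:.3f}")
```

Under these assumed error rates, the naive placement perturbs weights far more than the remapped one, which is consistent with the abstract's observation that protecting the significant bits keeps accuracy loss below 1% at a 0.01% injected BER.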
Pages: 1-6
Page count: 6
Related Papers
50 records total (first 10 shown)
  • [1] Fault Resilience Techniques for Flash Memory of DNN Accelerators
    Lu, Shyue-Kung
    Wu, Yu-Sheng
    Hong, Jin-Hua
    Miyase, Kohei
    2022 IEEE INTERNATIONAL TEST CONFERENCE (ITC), 2022, : 591 - 600
  • [2] Fault Resilience of DNN Accelerators for Compressed Sensor Inputs
    Arunachalam, Ayush
    Kundu, Shamik
    Raha, Arnab
    Banerjee, Suvadeep
    Basu, Kanad
    2022 IEEE COMPUTER SOCIETY ANNUAL SYMPOSIUM ON VLSI (ISVLSI 2022), 2022, : 329 - 332
  • [3] Special Session: Approximation and Fault Resiliency of DNN Accelerators
    Ahmadilivani, Mohammad Hasan
    Barbareschi, Mario
    Barone, Salvatore
    Bosio, Alberto
    Daneshtalab, Masoud
    Della Torca, Salvatore
    Gavarini, Gabriele
    Jenihhin, Maksim
    Raik, Jaan
    Ruospo, Annachiara
    Sanchez, Ernesto
    Taheri, Mahdi
    2023 IEEE 41ST VLSI TEST SYMPOSIUM, VTS, 2023,
  • [4] Increasing Throughput of In-Memory DNN Accelerators by Flexible Layerwise DNN Approximation
    De la Parra, Cecilia
    Soliman, Taha
    Guntoro, Andre
    Kumar, Akash
    Wehn, Norbert
    IEEE MICRO, 2022, 42 (06) : 17 - 24
  • [5] Shaped Pruning for Efficient Memory Addressing in DNN Accelerators
    Woo, Yunhee
    Kim, Dongyoung
    Jeong, Jaemin
    Lee, Jeong-Gun
    2021 IEEE INTERNATIONAL CONFERENCE ON CONSUMER ELECTRONICS-ASIA (ICCE-ASIA), 2021,
  • [6] FAQ: Mitigating the Impact of Faults in the Weight Memory of DNN Accelerators through Fault-Aware Quantization
    Hanif, Muhammad Abdullah
    Shafique, Muhammad
    2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023,
  • [7] Analysis of Conventional, Near-Memory, and In-Memory DNN Accelerators
    Glint, Tom
    Jha, Chandan Kumar
    Awasthi, Manu
    Mekie, Joycee
    2023 IEEE INTERNATIONAL SYMPOSIUM ON PERFORMANCE ANALYSIS OF SYSTEMS AND SOFTWARE, ISPASS, 2023, : 349 - 351
  • [8] AdAM: Adaptive Approximate Multiplier for Fault Tolerance in DNN Accelerators
    Taheri, Mahdi
    Cherezova, Natalia
    Nazari, Samira
    Azarpeyvand, Ali
    Ghasempouri, Tara
    Daneshtalab, Masoud
    Raik, Jaan
    Jenihhin, Maksim
    IEEE TRANSACTIONS ON DEVICE AND MATERIALS RELIABILITY, 2025, 25 (01) : 66 - 75
  • [9] API-Based Hardware Fault Simulation for DNN Accelerators
    Omland, Patrik
    Peng, Yang
    Paulitsch, Michael
    Parra, Jorge
    Espinosa, Gustavo
    Daniel, Abishai
    Hinz, Gereon
    Knoll, Alois
    IEEE DESIGN & TEST, 2023, 40 (02) : 75 - 81
  • [10] Benchmarking DNN Mapping Methods for the in-Memory Computing Accelerators
    Wang, Yimin
    Fong, Xuanyao
    IEEE JOURNAL ON EMERGING AND SELECTED TOPICS IN CIRCUITS AND SYSTEMS, 2023, 13 (04) : 1040 - 1051