The Impact of Analog-to-Digital Converter Architecture and Variability on Analog Neural Network Accuracy

Cited by: 2
Authors
Spear, Matthew [1 ,2 ]
Kim, Joshua E. [1 ]
Bennett, Christopher H. [2 ]
Agarwal, Sapan [2 ]
Marinella, Matthew J. [1 ]
Xiao, T. Patrick [2 ]
Affiliations
[1] Arizona State Univ, Sch Elect Comp & Energy Engn, Tempe, AZ 85287 USA
[2] Sandia Natl Labs, Albuquerque, NM 87123 USA
Source
IEEE JOURNAL ON EXPLORATORY SOLID-STATE COMPUTATIONAL DEVICES AND CIRCUITS | 2023, Vol. 9, No. 2
Keywords
Analog computing; analog-to-digital conversion; in-memory computing (IMC); machine learning; neural network; process variations; MEMORY;
DOI
10.1109/JXCDC.2023.3315134
CLC Number
TP3 [Computing Technology, Computer Technology];
Discipline Code
0812;
Abstract
The analog-to-digital converter (ADC) is not only a key component in analog in-memory computing (IMC) accelerators but also a bottleneck for the efficiency and accuracy of these systems. While the tradeoffs between power consumption, latency, and area in ADC design are well studied, it remains relatively unknown which ADC implementations are optimal for algorithmic accuracy, particularly for neural network inference. We explore the design space of the ADC with a focus on accuracy, investigating the sensitivity of neural network outputs to component variability inside the ADC and how this sensitivity depends on the ADC architecture. We develop compact models of the pipeline, cyclic, successive-approximation-register (SAR), and ramp ADCs and use these models in a system-level accuracy simulation of analog neural network inference. Our results show how the accuracy on a complex image recognition benchmark (ResNet50 on ImageNet) depends on the capacitance mismatch, comparator offset, and effective number of bits (ENOB) for each of the four ADC architectures. We find that robustness to component variations depends strongly on the ADC design and that inference accuracy is particularly sensitive to the value-dependent error characteristics of the ADC, which cannot be captured by the conventional ENOB precision metric.
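To illustrate the value-dependent error mechanisms named in the abstract (comparator offset and capacitance mismatch in a SAR ADC), here is a minimal behavioral sketch. It is not the paper's compact model; the function name, the additive-offset model, and the per-bit multiplicative weight-error model are illustrative assumptions only.

```python
def sar_adc(vin, nbits=8, vref=1.0, comp_offset=0.0, cap_mismatch=None):
    """Behavioral SAR ADC: binary search over capacitor bit weights.

    vin          -- input voltage, assumed in [0, vref)
    comp_offset  -- constant comparator input-referred offset (V); an
                    illustrative assumption, modeled as shifting every
                    comparison threshold by the same amount
    cap_mismatch -- optional per-bit fractional weight errors (MSB first),
                    an illustrative stand-in for capacitance mismatch
    """
    if cap_mismatch is None:
        cap_mismatch = [0.0] * nbits
    code = 0
    vdac = 0.0  # DAC voltage accumulated from bits decided so far
    for i in range(nbits):
        # Nominal bit weight vref/2^(i+1), perturbed by capacitor mismatch
        w = (vref / 2 ** (i + 1)) * (1.0 + cap_mismatch[i])
        # Keep the bit only if the input exceeds the (offset) trial level
        if vin > vdac + w + comp_offset:
            vdac += w
            code |= 1 << (nbits - 1 - i)
    return code


# A constant comparator offset shifts all codes uniformly, while MSB
# capacitor mismatch distorts codes only for inputs that exercise that
# bit -- a value-dependent error that a single ENOB figure cannot capture.
ideal = sar_adc(0.6)
offset_code = sar_adc(0.6, comp_offset=0.05)
mismatch_code = sar_adc(0.6, cap_mismatch=[0.1] + [0.0] * 7)
```

In a system-level inference simulation, a model like this would replace the ideal quantizer on each array's output path, so that the same analog dot-product value can map to different codes depending on which error source is active.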
Pages: 176-184 (9 pages)