Analysis of Conventional, Near-Memory, and In-Memory DNN Accelerators

Cited by: 2
Authors
Glint, Tom [1 ]
Jha, Chandan Kumar [2 ]
Awasthi, Manu [3 ]
Mekie, Joycee [1 ]
Affiliations
[1] IIT Gandhinagar, Palaj, Gujarat, India
[2] DFKI, Kaiserslautern, Germany
[3] Ashoka Univ, Sonipat, India
Source
2023 IEEE INTERNATIONAL SYMPOSIUM ON PERFORMANCE ANALYSIS OF SYSTEMS AND SOFTWARE (ISPASS) | 2023
DOI
10.1109/ISPASS57527.2023.00049
Chinese Library Classification (CLC): TP3 [Computing Technology, Computer Technology]
Discipline Classification Code: 0812
Abstract
Various DNN accelerators based on the Conventional compute Hardware Accelerator (CHA), Near-Data-Processing (NDP), and Processing-in-Memory (PIM) paradigms have been proposed to meet the challenges of Deep Neural Network (DNN) inference. To the best of our knowledge, this work performs the first quantitative and qualitative comparison among state-of-the-art accelerators from each digital DNN accelerator paradigm. Our study provides insights into selecting the best architecture for a given DNN workload. We use workloads from the MLPerf Inference benchmark. We observe that for Fully Connected Layer (FCL) DNNs, the PIM-based accelerator is 21x and 3x faster than the CHA- and NDP-based accelerators, respectively. However, for FCL, NDP is 9x and 2.5x more energy efficient than CHA and PIM, respectively. For Convolutional Neural Network (CNN) workloads, CHA is 10% faster than the NDP-based accelerator and 5x faster than the PIM-based accelerator. Further, CHA is 1.5x and 6x more energy efficient than the NDP- and PIM-based accelerators, respectively.
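The headline ratios from the abstract can be tabulated into a simple per-workload lookup that reflects the paper's selection insight. This is an illustrative sketch, not code from the paper: the normalization to CHA = 1.0 and the `best` helper are assumptions made here purely to encode the reported relative figures.

```python
# Relative figures as reported in the abstract, normalized so CHA = 1.0
# for each workload/metric (higher = better). Values are derived from the
# stated ratios, e.g. for FCL speed: PIM = 21x CHA and PIM = 3x NDP,
# so NDP = 21/3 relative to CHA. This normalization is our assumption.
results = {
    "FCL": {
        "speedup":    {"CHA": 1.0, "NDP": 21 / 3, "PIM": 21.0},
        "energy_eff": {"CHA": 1.0, "NDP": 9.0,    "PIM": 9 / 2.5},
    },
    "CNN": {
        # CHA is 10% (1.1x) faster than NDP and 5x faster than PIM;
        # CHA is 1.5x and 6x more energy efficient than NDP and PIM.
        "speedup":    {"CHA": 1.0, "NDP": 1 / 1.1, "PIM": 1 / 5},
        "energy_eff": {"CHA": 1.0, "NDP": 1 / 1.5, "PIM": 1 / 6},
    },
}

def best(workload: str, metric: str) -> str:
    """Return the paradigm with the highest relative value for a metric."""
    scores = results[workload][metric]
    return max(scores, key=scores.get)

print(best("FCL", "speedup"))      # PIM wins FCL latency
print(best("FCL", "energy_eff"))   # NDP wins FCL energy
print(best("CNN", "speedup"))      # CHA wins CNN latency and energy
```

Under these numbers, no single paradigm dominates: PIM leads FCL latency, NDP leads FCL energy, and CHA leads on both metrics for CNNs, which is the workload-dependent conclusion the abstract draws.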
Pages: 349-351
Page count: 3