Special Topic on Energy-Efficient Compute-in-Memory With Emerging Devices

Cited: 0
Author
Seo, Jae-Sun [1 ]
Affiliation
[1] Arizona State Univ, Sch Elect Comp & Energy Engn, Tempe, AZ 85298 USA
Source
IEEE JOURNAL ON EXPLORATORY SOLID-STATE COMPUTATIONAL DEVICES AND CIRCUITS | 2022, Vol. 8, Issue 2
DOI
10.1109/JXCDC.2022.3231764
CLC Number
TP3 [Computing Technology, Computer Technology];
Discipline Code
0812;
Abstract
Deep neural networks (DNNs) have shown extraordinary performance in recent years across a variety of applications, including image classification, object detection, speech recognition, and natural language processing. Accuracy-driven DNN architectures tend to grow model sizes and computation at a very fast pace, demanding a massive amount of hardware resources. Frequent communication between the processing engine and on-/off-chip memory leads to high energy consumption, which has become a bottleneck for conventional DNN accelerator design.
Pages: 3