SPINDLE: SPINtronic Deep Learning Engine for Large-scale Neuromorphic Computing

Cited by: 62
Authors
Ramasubramanian, Shankar Ganesh [1 ]
Venkatesan, Rangharajan [1 ]
Sharad, Mrigank [1 ]
Roy, Kaushik [1 ]
Raghunathan, Anand [1 ]
Affiliations
[1] Purdue Univ, Sch Elect & Comp Engn, W Lafayette, IN 47907 USA
Source
PROCEEDINGS OF THE 2014 IEEE/ACM INTERNATIONAL SYMPOSIUM ON LOW POWER ELECTRONICS AND DESIGN (ISLPED) | 2014
Keywords
Spintronics; Emerging Devices; Nanoelectronics; Post-CMOS; Neural Networks; Neuromorphic Computing;
DOI
10.1145/2627369.2627625
Chinese Library Classification (CLC)
TM [Electrical engineering]; TN [Electronics and communication technology];
Discipline codes
0808; 0809;
Abstract
Deep Learning Networks (DLNs) are bio-inspired large-scale neural networks that are widely used in emerging vision, analytics, and search applications. The high computation and storage requirements of DLNs have led to the exploration of various avenues for their efficient realization. Concurrently, the ability of emerging post-CMOS devices to efficiently mimic neurons and synapses has led to great interest in their use for neuromorphic computing. We describe SPINDLE, a programmable processor for deep learning based on spintronic devices. SPINDLE exploits the unique ability of spintronic devices to realize highly dense and energy-efficient neurons and memory, which form the fundamental building blocks of DLNs. SPINDLE consists of a three-tier hierarchy of processing elements to capture the nested parallelism present in DLNs, and a two-level memory hierarchy to facilitate data reuse. It can be programmed to execute DLNs with widely varying topologies for different applications. SPINDLE employs techniques to limit the overheads of spin-to-charge conversion, and utilizes output and weight quantization to enhance the efficiency of spin-neurons. We evaluate SPINDLE using a device-to-architecture modeling framework and a set of widely used DLN applications (handwriting recognition, face detection, and object recognition). Our results indicate that SPINDLE achieves a 14.4X reduction in energy consumption and a 20.4X reduction in energy-delay product (EDP) over the CMOS baseline under iso-area conditions.
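The abstract notes that SPINDLE applies output and weight quantization to improve spin-neuron efficiency. As a rough illustration only, the sketch below shows generic uniform quantization of values in [-1, 1]; the paper does not specify SPINDLE's actual step sizes or bit-widths, so the 4-bit setting and the `quantize` helper here are illustrative assumptions, not the authors' scheme.

```python
import numpy as np

def quantize(x, n_bits):
    """Uniformly quantize values in [-1, 1] onto 2**n_bits - 1 evenly
    spaced levels. Generic sketch of weight/output quantization; the
    quantizer actually matched to spin-neuron resolution in SPINDLE is
    not described in this record.
    """
    levels = 2 ** n_bits - 1
    x = np.clip(x, -1.0, 1.0)          # keep inputs in the coded range
    # Map [-1, 1] -> [0, levels], snap to the nearest level, map back.
    return np.round((x + 1.0) / 2.0 * levels) / levels * 2.0 - 1.0

# Hypothetical synaptic weights quantized to 4 bits (15 levels):
weights = np.array([-0.73, 0.12, 0.98, -0.05])
q = quantize(weights, 4)
```

With n_bits = 4 the step size is 2/15, so every quantized weight lies within 1/15 of its original value; coarser quantization trades accuracy for cheaper spin-domain storage and evaluation.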
Pages: 15-20 (6 pages)