A sparse memory access architecture for digital neural network LSIs

Cited by: 0
Authors
Aihara, K
Fujita, O
Uchimura, K
Institutions
Keywords
neural network; digital LSI; perceptron; data-driven;
DOI
Not available
CLC Number
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Subject Classification Number
0808 ; 0809 ;
Abstract
A sparse memory access architecture, proposed to achieve a high-computational-speed neural-network LSI, is described in detail. This architecture uses two key techniques, compressible synapse-weight neuron calculation and differential neuron operation, to reduce the number of accesses to synapse-weight memories and the number of neuron calculations without incurring an accuracy penalty. A test chip based on this architecture has 96 parallel data-driven processing units and enough memory for 12,288 synapse weights. In a pattern-recognition example, the number of memory accesses and neuron calculations was reduced to 0.87% of that needed by the conventional method, and the practical performance was 18 GCPS. The sparse memory access architecture is also effective when the synapse weights are stored in off-chip memory.
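The two techniques named in the abstract can be illustrated with a minimal software sketch. This is an assumption-laden analogy, not the chip's actual implementation: weight values, sparsity level, and all names are illustrative. Compressed weight storage skips memory accesses for zero weights, and differential neuron operation updates accumulators only for inputs whose values changed since the previous pattern.

```python
# Sketch of the two techniques described in the abstract (illustrative only;
# not the paper's hardware implementation).
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 64, 32

# "Compressible synapse-weight" storage: keep only the nonzero weights per
# input line, so zero weights cost neither a memory access nor a calculation.
dense_w = rng.integers(-8, 9, size=(n_in, n_out))
dense_w[rng.random((n_in, n_out)) < 0.9] = 0  # make ~90% of weights zero
sparse_w = {i: [(j, int(w)) for j, w in enumerate(row) if w != 0]
            for i, row in enumerate(dense_w)}

def update(acc, prev_x, new_x):
    """Differential neuron operation: only inputs whose value changed
    trigger weight-memory accesses and accumulator updates."""
    accesses = 0
    for i in range(n_in):
        delta = int(new_x[i]) - int(prev_x[i])
        if delta == 0:
            continue  # unchanged input: no memory access, no calculation
        for j, w in sparse_w[i]:  # visit only the stored (nonzero) weights
            acc[j] += delta * w
            accesses += 1
    return acc, accesses

# First pattern: full update relative to an all-zero state.
x0 = rng.integers(0, 2, size=n_in)
acc, a0 = update(np.zeros(n_out, dtype=int), np.zeros(n_in, dtype=int), x0)

# Second, similar pattern: only the changed inputs are processed.
x1 = x0.copy()
x1[:4] ^= 1  # flip a few input bits
acc, a1 = update(acc, x0, x1)
print(a0, a1)  # differential accesses vs. the initial full pass
```

The invariant is that the differentially maintained accumulator always equals the full dense product `x1 @ dense_w`, which is the sense in which the paper's reductions come "without incurring an accuracy penalty".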
Pages: 996-1002
Page count: 7
Related Papers
5 items
  • [1] AIHARA K, 1995, ISSCC DIG TECH PAP I, V38, P72, DOI 10.1109/ISSCC.1995.535281
  • [2] KONDO Y, 1994, ISSCC DIG TECH PAP I, V37, P218, DOI 10.1109/ISSCC.1994.344663
  • [3] NAKAHIRA H, 1993, 1993 SYMPOSIUM ON VLSI CIRCUITS, P35
  • [4] Rosenblatt F., 1961, CORNELL AERONAUTICAL LAB INC BUFFALO NY
  • [5] UCHIMURA K, SAITO O, AMEMIYA Y, 1992, A high-speed digital neural network chip with low-power chain-reaction architecture, IEEE JOURNAL OF SOLID-STATE CIRCUITS, V27 (12), P1862-1867