Dimension fusion: Dimension-level dynamically composable accelerator for convolutional neural networks

Cited by: 2
Authors
Deng, Huipeng [1 ]
Wang, Jian [1 ]
Ye, Huafeng [2 ]
Xiao, Shanlin [2 ]
Meng, Xiangyu [1 ]
Yu, Zhiyi [2 ]
Affiliations
[1] Sun Yat Sen Univ, Sch Elect & Informat Technol, Guangzhou 510006, Peoples R China
[2] Sun Yat Sen Univ, Sch Microelect Sci & Technol, Zhuhai 519082, Peoples R China
Source
IEICE ELECTRONICS EXPRESS | 2021 / Vol. 18 / No. 24
Funding
National Natural Science Foundation of China
Keywords
dynamic fusion; Winograd algorithm; accelerators; convolutional neural networks; CNN ACCELERATOR; WINOGRAD; EFFICIENT;
DOI
10.1587/elex.18.20210491
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Discipline Codes
0808; 0809
Abstract
Convolutional neural networks (CNNs) have proven promising in applications such as audio recognition, image classification, and video understanding. The Winograd algorithm reduces the computational complexity of convolution but suffers from poor compatibility across different convolution shapes. This work introduces a dynamic dimension-level fusion architecture based on the Winograd algorithm for accelerating CNNs whose layers differ in dimension. We explore this Winograd architecture by designing Dimension Fusion, a dimension-level processing engine that dynamically fuses to match the convolution shape of each CNN layer. The proposed architecture is the first Winograd-based design compatible with all convolution shapes (dimension, stride, and filter size), and it achieves up to 1.55x higher PE efficiency and up to 3.3x higher energy efficiency than state-of-the-art accelerators.
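For context on the Winograd algorithm named in the abstract, below is a minimal NumPy sketch of the standard 1-D F(2,3) Winograd transform (the B^T, G, and A^T matrices of Lavin and Gray's formulation), which computes two outputs of a 3-tap convolution with four multiplications instead of six. This illustrates only the underlying algorithm, not the paper's Dimension Fusion architecture; the function name winograd_f23 and the test values are illustrative assumptions.

```python
import numpy as np

# Transform matrices for Winograd F(2, 3): two outputs of a 3-tap
# filter computed with 4 element-wise multiplications instead of 6.
B_T = np.array([[1,  0, -1,  0],
                [0,  1,  1,  0],
                [0, -1,  1,  0],
                [0,  1,  0, -1]], dtype=float)   # input transform
G = np.array([[1.0,  0.0, 0.0],
              [0.5,  0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0,  0.0, 1.0]])                 # filter transform
A_T = np.array([[1, 1,  1,  0],
                [0, 1, -1, -1]], dtype=float)    # output transform

def winograd_f23(d, g):
    """F(2,3): d is a 4-element input tile, g is a 3-tap filter."""
    U = G @ g        # transform the filter
    V = B_T @ d      # transform the input tile
    M = U * V        # 4 element-wise multiplications
    return A_T @ M   # inverse transform -> 2 convolution outputs

# Check against direct (correlation-style) convolution on one tile.
d = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([0.5, -1.0, 2.0])
direct = np.array([d[0:3] @ g, d[1:4] @ g])
assert np.allclose(winograd_f23(d, g), direct)
print(winograd_f23(d, g))
```

In a 2-D CNN layer the same idea is applied along both spatial axes, which is why the transform is tied to a fixed tile and filter size; the accelerator described in this record addresses that rigidity by fusing processing resources at the dimension level.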
Pages: 6