Energy Efficient Techniques using FFT for Deep Convolutional Neural Networks

Cited: 0
Authors
Nhan Nguyen-Thanh [1 ]
Han Le-Duc [1 ]
Duc-Tuyen Ta [1 ]
Van-Tam Nguyen [1 ,2 ]
Affiliations
[1] Univ Paris Saclay, CNRS, LTCI, Telecom ParisTech, F-75013 Paris, France
[2] Stanford Univ, Dept Elect Engn, Stanford, CA 94305 USA
Keywords: none listed
DOI: none available
Chinese Library Classification (CLC): TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Discipline codes: 0808; 0809
Abstract
Deep convolutional neural networks (CNNs) have been developed for a wide range of applications such as image recognition and natural language processing. However, deploying deep CNNs on home and mobile devices remains challenging because computing high-dimensional convolutions demands substantial computing resources and energy. In this paper, we propose a novel approach designed to minimize energy consumption in the computation of convolutions in deep CNNs. The proposed solution includes (i) an optimal selection method for the Fast Fourier Transform (FFT) configuration associated with splitting input feature maps, (ii) a reconfigurable hardware architecture for computing high-dimensional convolutions based on 2D-FFT, and (iii) an optimal pipeline data movement schedule. The FFT size selection method determines the optimal length of each input split for the lowest energy consumption. The hardware architecture contains a processing engine (PE) array whose PEs are connected to form parallel flexible-length Radix-2 single-delay feedback lines, enabling the computation of variable-size 2D-FFTs. The pipeline data movement schedule optimizes the transition between the row-wise and column-wise FFT passes of a 2D-FFT and minimizes the data accesses required for element-wise accumulation across input channels. Using simulations, we demonstrate that the proposed framework reduces energy consumption by 89.7% in the inference case.
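The FFT-based convolution the abstract describes — splitting input feature maps and choosing a radix-2 FFT size per split — can be sketched as a tiled overlap-add 2D convolution. This is an illustrative reconstruction, not the paper's implementation: the function name, the `tile` parameter, and the power-of-two size rule are assumptions; the paper additionally selects the tile length to minimize energy, which is not modeled here.

```python
import numpy as np

def fft_conv2d_overlap_add(x, k, tile):
    """2D convolution via tiled 2D-FFT with overlap-add.

    x    : input feature map (H x W)
    k    : kernel (kh x kw)
    tile : side length of each input split; the FFT size is the
           smallest power of two >= tile + kernel_size - 1 (radix-2).
    """
    kh, kw = k.shape
    H, W = x.shape
    # Radix-2 FFT size large enough to hold a tile plus the kernel overlap,
    # so the circular convolution equals the linear one on each tile.
    n = 1 << int(np.ceil(np.log2(tile + max(kh, kw) - 1)))
    Kf = np.fft.rfft2(k, s=(n, n))  # kernel spectrum, computed once
    out = np.zeros((H + kh - 1, W + kw - 1))
    for i in range(0, H, tile):
        for j in range(0, W, tile):
            blk = x[i:i + tile, j:j + tile]
            # Row-wise + column-wise FFT, element-wise product, inverse FFT.
            y = np.fft.irfft2(np.fft.rfft2(blk, s=(n, n)) * Kf, s=(n, n))
            bh, bw = blk.shape
            # Overlap-add: each tile's full convolution spills kh-1 (kw-1)
            # samples past the tile boundary and is accumulated into out.
            out[i:i + bh + kh - 1, j:j + bw + kw - 1] += \
                y[:bh + kh - 1, :bw + kw - 1]
    return out
```

A smaller `tile` lowers the FFT size (and per-transform cost) but increases the number of tiles and the overlap overhead; the paper's selection method picks the point in this trade-off that minimizes energy.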
Pages: 231-236 (6 pages)