GANDSE: Generative Adversarial Network-based Design Space Exploration for Neural Network Accelerator Design

Cited by: 5
Authors
Feng, Lang [1 ]
Liu, Wenjian [1 ]
Guo, Chuliang [2 ]
Tang, Ke [1 ]
Zhuo, Cheng [2 ,3 ]
Wang, Zhongfeng [1 ]
Affiliations
[1] Nanjing Univ, 163 Xianlin Rd, Nanjing 210023, Peoples R China
[2] Zhejiang Univ, 866 Yuhangtang Rd, Hangzhou 310058, Peoples R China
[3] Key Lab Collaborat Sensing & Autonomous Unmanned, Hangzhou 310027, Peoples R China
Funding
National Natural Science Foundation of China; National Key Research and Development Program of China;
Keywords
Design space exploration; generative adversarial networks;
DOI
10.1145/3570926
Chinese Library Classification (CLC) number
TP3 [Computing Technology, Computer Technology];
Discipline classification code
0812;
Abstract
With the popularity of deep learning, hardware implementation platforms for deep learning have received increasing interest. Unlike general-purpose devices such as CPUs or GPUs, where deep learning algorithms are executed at the software level, neural network hardware accelerators execute the algorithms directly in hardware to achieve higher energy efficiency and better performance. However, as deep learning algorithms evolve rapidly, the engineering effort and cost of designing hardware accelerators increase greatly. To improve design quality while reducing cost, design automation for neural network accelerators has been proposed, in which design space exploration algorithms automatically search for an optimized accelerator design within a design space. Nevertheless, the growing complexity of neural network accelerators adds ever more dimensions to the design space, so previous design space exploration algorithms are no longer effective enough to find an optimized design. In this work, we propose a neural network accelerator design automation framework named GANDSE, in which we rethink the design space exploration problem and propose a novel approach based on generative adversarial networks (GANs) to support effective exploration of large, high-dimensional design spaces. Experiments show that GANDSE finds more optimized designs in negligible time compared with approaches including multilayer perceptron and deep reinforcement learning.
Pages: 20
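The record above carries only the abstract, but the core idea it describes, using a GAN to propose accelerator configurations in a large, high-dimensional design space, can be illustrated with a short sketch. The snippet below is a minimal conditional-GAN-style explorer in PyTorch; the 6-dimensional design vector, the target-metric condition, the layer sizes, and the training loop are illustrative assumptions, not GANDSE's actual architecture or training procedure.

```python
# Hedged sketch of GAN-based design space exploration.
# All dimensions, layer sizes, and the condition semantics are assumptions
# made for illustration; they are not taken from the GANDSE paper.
import torch
import torch.nn as nn

DESIGN_DIM = 6   # assumed design knobs, e.g., PE-array and buffer parameters
NOISE_DIM = 16   # assumed latent-noise size
COND_DIM = 1     # assumed condition: a target quality metric (e.g., latency)


class Generator(nn.Module):
    """Maps (noise, target metric) to a candidate design point in [0, 1]^DESIGN_DIM."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + COND_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, DESIGN_DIM), nn.Sigmoid(),  # normalized design vector
        )

    def forward(self, z, cond):
        return self.net(torch.cat([z, cond], dim=1))


class Discriminator(nn.Module):
    """Scores whether a (design, target metric) pair resembles a good evaluated sample."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(DESIGN_DIM + COND_DIM, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),
        )

    def forward(self, x, cond):
        return self.net(torch.cat([x, cond], dim=1))


def train_step(gen, disc, real_designs, real_conds, g_opt, d_opt, bce):
    """One adversarial update on a batch of (design, metric) pairs
    collected from a performance/cost model of the accelerator."""
    batch = real_designs.size(0)
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

    # Discriminator step: real evaluated designs vs. generated designs.
    z = torch.randn(batch, NOISE_DIM)
    fake = gen(z, real_conds).detach()
    d_loss = bce(disc(real_designs, real_conds), ones) + bce(disc(fake, real_conds), zeros)
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: try to fool the discriminator.
    z = torch.randn(batch, NOISE_DIM)
    g_loss = bce(disc(gen(z, real_conds), real_conds), ones)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()


if __name__ == "__main__":
    # Toy demo with random stand-in data in place of designs evaluated
    # by a real performance model.
    gen, disc = Generator(), Discriminator()
    g_opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
    d_opt = torch.optim.Adam(disc.parameters(), lr=1e-3)
    bce = nn.BCELoss()
    designs, metrics = torch.rand(128, DESIGN_DIM), torch.rand(128, COND_DIM)
    print(train_step(gen, disc, designs, metrics, g_opt, d_opt, bce))
```

At exploration time, one would sample many latent vectors conditioned on an aggressive target metric and keep only those generated designs that an analytical performance or cost model confirms; how GANDSE actually structures this loop is detailed in the paper itself.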