A Design Space Exploration Framework for Deployment of Resource-Constrained Deep Neural Networks

Cited by: 0
Authors
Zhang, Yan [1 ]
Pan, Lei [1 ]
Berkowitz, Phillip [2 ]
Lee, Mun Wai [2 ]
Riggan, Benjamin [3 ]
Bhattacharyya, Shuvra S. [1 ]
Affiliations
[1] Univ Maryland, College Pk, MD 20742 USA
[2] Intelligent Automat, Rockville, MD 20855 USA
[3] Univ Nebraska, Lincoln, NE 68588 USA
Source
REAL-TIME IMAGE PROCESSING AND DEEP LEARNING 2024 | 2024 / Vol. 13034
Keywords
Design space exploration; Deep neural networks; Dataflow modeling; Resource-constrained deployment; Particle swarm optimization;
DOI
10.1117/12.3014043
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recent years have witnessed great progress in the development of deep neural networks (DNNs), which has led to growing interest in deploying DNNs in resource-constrained settings such as network-edge and edge-cloud environments. To meet the objectives of efficient DNN inference, numerous approaches and specialized platforms have been developed for inference acceleration. The flexibility and diverse capabilities offered by these approaches and platforms result in large design spaces with complex trade-offs for DNN deployment. Relevant objectives involved in these trade-offs include inference accuracy, latency, throughput, memory requirements, and energy consumption. Tools that can effectively assist designers in deriving efficient DNN configurations for specific deployment scenarios are therefore needed. In this work, we present a design space exploration framework for this purpose. In the proposed framework, DNNs are represented as dataflow graphs using a lightweight-dataflow-based modeling tool, and schedules (strategies for managing processing resources across different DNN tasks) are modeled in a formal, abstract form using dataflow methods as well. The dataflow-based application and schedule representations are integrated systematically with a multiobjective particle swarm optimization (PSO) strategy, which enables efficient evaluation of implementation trade-offs and derivation of Pareto fronts involving alternative deployment configurations. Experimental results using different DNN architectures demonstrate the effectiveness of our proposed framework in exploring design spaces for DNN deployment.
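To make the optimization step described in the abstract concrete, the following is a minimal, self-contained Python sketch of a multiobjective PSO loop that maintains a Pareto archive over candidate deployment configurations. It is an illustration under stated assumptions, not the authors' implementation: the decision variables (quantization bit-width, batch size, core count), their bounds, and the toy latency/accuracy-loss objective models are hypothetical stand-ins for the profiling-based evaluation and dataflow-based application/schedule representations used in the paper.

import random

# Hypothetical design variables: [quantization bit-width, batch size, core count]
LOWER = [2.0, 1.0, 1.0]
UPPER = [16.0, 64.0, 8.0]

def evaluate(position):
    """Toy surrogate objectives (assumptions): lower bit-width and more cores reduce
    latency but increase accuracy loss; a real flow would profile actual inference."""
    bits, batch, cores = position
    latency = batch / cores + 0.5 * bits      # hypothetical latency model (ms)
    acc_loss = 1.0 / bits + 0.01 * cores      # hypothetical accuracy-loss model
    return (latency, acc_loss)

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (both objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def mopso(num_particles=30, iterations=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    dim = len(LOWER)
    pos = [[rng.uniform(LOWER[d], UPPER[d]) for d in range(dim)] for _ in range(num_particles)]
    vel = [[0.0] * dim for _ in range(num_particles)]
    pbest = [p[:] for p in pos]
    pbest_obj = [evaluate(p) for p in pos]
    archive = []  # non-dominated (position, objectives) pairs found so far

    def update_archive(p, obj):
        nonlocal archive
        if any(dominates(a_obj, obj) for _, a_obj in archive):
            return
        archive = [(a_p, a_obj) for a_p, a_obj in archive if not dominates(obj, a_obj)]
        archive.append((p[:], obj))

    for p, obj in zip(pos, pbest_obj):
        update_archive(p, obj)

    for _ in range(iterations):
        for i in range(num_particles):
            leader = rng.choice(archive)[0]  # pick a social guide from the Pareto archive
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (leader[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], LOWER[d]), UPPER[d])
            obj = evaluate(pos[i])
            # Standard personal-best rule: replace if dominating, random tie-break otherwise.
            if dominates(obj, pbest_obj[i]) or (not dominates(pbest_obj[i], obj) and rng.random() < 0.5):
                pbest[i], pbest_obj[i] = pos[i][:], obj
            update_archive(pos[i], obj)
    return archive

if __name__ == "__main__":
    for point, (lat, loss) in sorted(mopso(), key=lambda t: t[1]):
        print(f"config={['%.1f' % v for v in point]}  latency={lat:.2f}  acc_loss={loss:.3f}")

In a realistic instantiation, evaluate() would be replaced by measurements or analytical models derived from the dataflow-based application and schedule representations, and the returned archive would form the reported Pareto front of deployment configurations.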
Pages: 12
Related Papers
50 records in total
  • [1] Network-level Design Space Exploration of Resource-constrained Networks-of-Systems
    Zhao, Zhuoran
    Barijough, Kamyar Mirzazad
    Gerstlauer, Andreas
    ACM TRANSACTIONS ON EMBEDDED COMPUTING SYSTEMS, 2020, 19 (04)
  • [2] AERO: Design Space Exploration Framework for Resource-Constrained CNN Mapping on Tile-Based Accelerators
    Yang, Simei
    Bhattacharjee, Debjyoti
    Kumar, Vinay B. Y.
    Chatterjee, Saikat
    De, Sayandip
    Debacker, Peter
    Verkest, Diederik
    Mallik, Arindam
    Catthoor, Francky
    IEEE JOURNAL ON EMERGING AND SELECTED TOPICS IN CIRCUITS AND SYSTEMS, 2022, 12 (02) : 508 - 521
  • [3] Selective Binarization based Architecture Design Methodology for Resource-constrained Computation of Deep Neural Networks
    Chandrapu, Ramesh Reddy
    Gyaneshwar, Dubacharla
    Channappayya, Sumohana
    Acharyya, Amit
    2023 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS, ISCAS, 2023,
  • [4] Head Network Distillation: Splitting Distilled Deep Neural Networks for Resource-Constrained Edge Computing Systems
    Matsubara, Yoshitomo
    Callegaro, Davide
    Baidya, Sabur
    Levorato, Marco
    Singh, Sameer
    IEEE ACCESS, 2020, 8 (08) : 212177 - 212193
  • [5] POSTER: Design Space Exploration for Performance Optimization of Deep Neural Networks on Shared Memory Accelerators
    Venkataramani, Swagath
    Choi, Jungwook
    Srinivasan, Vijayalakshmi
    Gopalakrishnan, Kailash
    Chang, Leland
    2017 26TH INTERNATIONAL CONFERENCE ON PARALLEL ARCHITECTURES AND COMPILATION TECHNIQUES (PACT), 2017, : 146 - 147
  • [6] Resource-constrained FPGA/DNN co-design
    Zhang, Zhichao
    Kouzani, Abbas Z.
    NEURAL COMPUTING & APPLICATIONS, 2021, 33 (21) : 14741 - 14751
  • [7] CSDSE: An efficient design space exploration framework for deep neural network accelerator based on cooperative search
    Feng, Kaijie
    Fan, Xiaoya
    An, Jianfeng
    Wang, Haoyang
    Li, Chuxi
    NEUROCOMPUTING, 2025, 623
  • [8] Deep Active Audio Feature Learning in Resource-Constrained Environments
    Mohaimenuzzaman, Md
    Bergmeir, Christoph
    Meyer, Bernd
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2024, 32 : 3224 - 3237
  • [9] ACCDSE: A Design Space Exploration Framework for Convolutional Neural Network Accelerator
    Li, Zhisheng
    Wang, Lei
    Dou, Qiang
    Tang, Yuxing
    Guo, Shasha
    Zhou, Haifang
    Lu, Wenyuan
    COMPUTER ENGINEERING AND TECHNOLOGY, NCCET 2017, 2018, 600 : 22 - 34
  • [10] Preliminary Application of Deep Learning to Design Space Exploration
    Roy, Kallol
    Torun, Hakki Mert
    Swaminathan, Madhavan
    2018 IEEE ELECTRICAL DESIGN OF ADVANCED PACKAGING AND SYSTEMS SYMPOSIUM (EDAPS 2018), 2018,