E-DNAS: Differentiable Neural Architecture Search for Embedded Systems

Cited by: 8
Authors
Garcia Lopez, Javier [1 ]
Agudo, Antonio [2 ]
Moreno-Noguer, Francesc [2 ]
Affiliations
[1] FICOSA ADAS SLU, Barcelona 08232, Spain
[2] CSIC UPC, Inst Robot & Informat Ind, Barcelona 08028, Spain
Source
2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR) | 2021
Keywords
Deep Learning; Neural Architecture Search; Convolutional Meta Kernels
DOI
10.1109/ICPR48806.2021.9412130
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Designing optimal, lightweight networks that fit on resource-limited platforms such as mobile devices, DSPs or GPUs is a challenging problem with a wide range of applications, e.g. in embedded systems for autonomous driving. While most approaches rely on manual hyperparameter tuning, a new line of research, the so-called NAS (Neural Architecture Search) methods, aims to optimize several metrics during the design process, including the memory requirements of the network, the number of FLOPs, the number of MACs (Multiply-ACcumulate operations) and the inference latency. However, while NAS methods have shown very promising results, they remain significantly time- and cost-consuming. In this work we introduce E-DNAS, a differentiable architecture search method that improves the efficiency of NAS in designing lightweight networks for image classification. Concretely, E-DNAS computes, in a differentiable manner, the optimal size of a number of meta-kernels that capture patterns of the input data at different resolutions. We also leverage the additive property of convolution operations to merge several kernels of compatible sizes into a single one, thus reducing the number of operations and the time required to estimate the optimal configuration. We evaluate our approach on several classification datasets and report results in terms of the SoC (System on Chip) metric typically used in the Texas Instruments TDA2x family for autonomous driving applications. The results show that our approach allows designing low-latency architectures significantly faster than the state of the art.
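The two mechanisms the abstract describes, differentiable selection over candidate kernel sizes and merging compatible kernels through the additivity of convolution, can be illustrated with a short sketch. The code below is not the authors' implementation: the class name MergedMetaKernel, the candidate sizes (3, 5, 7) and the DARTS-style softmax weighting over architecture parameters are assumptions made for illustration. It only demonstrates that zero-padding smaller kernels to a common support lets a weighted sum of candidates collapse into a single convolution.

```python
# Minimal sketch (assumed, not the authors' code) of the two ideas in the
# abstract: (1) differentiable selection over candidate kernel sizes and
# (2) merging compatible kernels into ONE convolution via additivity:
#     conv(x, k1) + conv(x, k2) == conv(x, k1 + k2)
#     once k2 is zero-padded to k1's spatial size.
import torch
import torch.nn.functional as F


class MergedMetaKernel(torch.nn.Module):
    """Hypothetical meta-kernel: a softmax-weighted sum of zero-padded
    3x3, 5x5 and 7x7 kernels, applied as a single convolution."""

    def __init__(self, in_ch, out_ch, sizes=(3, 5, 7)):
        super().__init__()
        self.max_k = max(sizes)
        self.kernels = torch.nn.ParameterList(
            torch.nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.1)
            for k in sizes
        )
        # One architecture weight per candidate size (DARTS-style alphas),
        # optimized by gradient descent alongside the kernel weights.
        self.alpha = torch.nn.Parameter(torch.zeros(len(sizes)))

    def forward(self, x):
        w = F.softmax(self.alpha, dim=0)
        merged = 0
        for wi, k in zip(w, self.kernels):
            pad = (self.max_k - k.shape[-1]) // 2
            # Zero-pad smaller kernels so all candidates share one support.
            merged = merged + wi * F.pad(k, [pad] * 4)
        # A single conv replaces len(sizes) separate convolutions.
        return F.conv2d(x, merged, padding=self.max_k // 2)


x = torch.randn(1, 16, 32, 32)
layer = MergedMetaKernel(16, 32)
print(layer(x).shape)  # torch.Size([1, 32, 32, 32])
```

Because the candidates are summed before the convolution is applied, each forward pass costs one convolution rather than one per candidate size, which is the saving the abstract attributes to the additive property.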
Pages: 4704-4711
Page count: 8
Related Papers
50 items in total
  • [41] A Neural Architecture Search for Automated Multimodal
    Singh, Anuraj
    Nair, Haritha
    EXPERT SYSTEMS WITH APPLICATIONS, 2022, 207
  • [42] Operation-level Progressive Differentiable Architecture Search
    Zhu, Xunyu
    Li, Jian
    Liu, Yong
    Liao, Jun
    Wang, Weiping
    2021 21ST IEEE INTERNATIONAL CONFERENCE ON DATA MINING (ICDM 2021), 2021, : 1559 - 1564
  • [43] Neural Architecture Search: A Visual Analysis
    Ochoa, Gabriela
    Veerapen, Nadarajen
    PARALLEL PROBLEM SOLVING FROM NATURE - PPSN XVII, PPSN 2022, PT I, 2022, 13398 : 603 - 615
  • [45] A technical view on neural architecture search
    Hu, Yi-Qi
    Yu, Yang
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2020, 11 (04) : 795 - 811
  • [46] Adaptive Channel Allocation for Robust Differentiable Architecture Search
    Li, Chao
    Ning, Jia
    Hu, Han
    He, Kun
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024
  • [47] Efficient Global Neural Architecture Search
Siddiqui, Shahid
    Kyrkou, Christos
    Theocharides, Theocharis
    SN COMPUTER SCIENCE, 6 (3)
  • [48] Differentiable neural architecture search augmented with pruning and multi-objective optimization for time-efficient intelligent fault diagnosis of machinery
    Zhang, Kaiyu
    Chen, Jinglong
    He, Shuilong
    Xu, Enyong
    Li, Fudong
    Zhou, Zitong
    MECHANICAL SYSTEMS AND SIGNAL PROCESSING, 2021, 158
  • [49] Designing resource-constrained neural networks using neural architecture search targeting embedded devices
    Cassimon, Amber
    Vanneste, Simon
    Bosmans, Stig
    Mercelis, Siegfried
    Hellinckx, Peter
    INTERNET OF THINGS, 2020, 12
  • [50] Neural Network Design: Learning from Neural Architecture Search
    van Stein, Bas
    Wang, Hao
    Back, Thomas
    2020 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (SSCI), 2020, : 1341 - 1349