Regularized Differentiable Architecture Search

Times Cited: 0
Authors
Wang, Lanfei [1,2]
Xie, Lingxi [3]
Zhao, Kaili [2]
Guo, Jun [2]
Tian, Qi [3]
Affiliations
[1] Huawei Technol Co Ltd, Huawei Cloud & AI BG, Beijing 100085, Peoples R China
[2] Beijing Univ Posts & Telecommun, Pattern Recognit & Intelligent Syst Lab, Beijing 100876, Peoples R China
[3] Huawei Technol Co Ltd, Beijing 100085, Peoples R China
Keywords
Computer architecture; Resource management; Training; Optimization; Market research; Topology; Stacking; Embedded systems; Deep learning; Computer vision; Machine learning; Neural networks; Automated machine learning; computer vision; deep learning; neural architecture search (NAS);
DOI
10.1109/LES.2022.3204856
CLC Classification Number
TP3 [Computing Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
Differentiable architecture search (DARTS) transforms architecture optimization into the optimization of a super network built by repeatedly stacking two searched cells (2 c.). However, repeatedly stacking the same two cells is suboptimal, since cells at different depths should differ. Moreover, we find that increasing the number of searched cells (e.g., from 2 c. to 5 c.) improves performance only slightly while leading to uneven resource allocation. This letter proposes regularized DARTS (RDARTS) to control the differences between cells and to balance degrees of freedom against resource allocation. Specifically, we assign separate architectural parameters to two reduction cells and three normal cells, and propose a Reg distance to measure the difference between cells. We then design a new validation loss, a weighted combination of the cross-entropy loss and the Reg loss, and introduce an adaptive adjustment method for the weighting. Results show that RDARTS achieves top-1 accuracies of 97.64% and 75.8% on CIFAR and ImageNet, respectively.
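To make the objective concrete, the following is a minimal Python/PyTorch-style sketch of the kind of weighted validation loss the abstract describes: a cross-entropy term combined with a Reg term computed from per-cell architectural parameters. The function names, the pairwise L2-style distance, and the fixed trade-off weight lam are illustrative assumptions; the letter's actual Reg distance and its adaptive adjustment scheme are not reproduced here.

import torch
import torch.nn.functional as F

def reg_distance(alphas):
    # Average pairwise L2 distance between the architectural parameter
    # tensors of the searched cells (e.g., three normal + two reduction
    # cells), each of shape (num_edges, num_ops). The L2 form is an
    # assumption for illustration, not the letter's published definition.
    dists = []
    for i in range(len(alphas)):
        for j in range(i + 1, len(alphas)):
            dists.append(torch.norm(alphas[i] - alphas[j], p=2))
    return torch.stack(dists).mean()

def validation_loss(logits, targets, alphas, lam):
    # Weighted combination of cross-entropy and the Reg term. The sign and
    # magnitude of lam (encouraging or limiting differences between cells)
    # would be set by the adaptive adjustment method described in the letter.
    ce = F.cross_entropy(logits, targets)
    return ce + lam * reg_distance(alphas)

In a DARTS-style bi-level setup, a loss of this form would replace the plain cross-entropy used in the architecture (validation) step.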
Pages: 129-132
Number of Pages: 4