Multi-Objective Neural Architecture Search by Learning Search Space Partitions

Cited by: 0
Authors
Zhao, Yiyang [1 ]
Wang, Linnan [2 ]
Guo, Tian [1 ]
Affiliations
[1] Worcester Polytech Inst, Worcester, MA 01609 USA
[2] Brown Univ, Providence, RI USA
Funding
US National Science Foundation;
Keywords
Neural Architecture Search; Monte Carlo Tree Search; AutoML; Deep Learning; EVOLUTIONARY ALGORITHMS; OPTIMIZATION; DIVERSITY; NETWORK;
DOI
Not available
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline code
0812;
Abstract
Deploying deep learning models requires taking into consideration neural network metrics such as model size, inference latency, and #FLOPs, aside from inference accuracy. As a result, deep learning model designers leverage multi-objective optimization to design deep neural networks that are effective under multiple criteria. However, applying multi-objective optimization to neural architecture search (NAS) is nontrivial because NAS tasks usually have a huge search space along with a non-negligible search cost. This requires effective multi-objective search algorithms to alleviate the GPU costs. In this work, we implement a novel multi-objective optimizer for NAS tasks based on LaMOO (Zhao et al., 2022), a recently proposed meta-algorithm. In a nutshell, LaMOO speeds up the search process by learning a model from observed samples to partition the search space and then focusing on promising regions likely to contain a subset of the Pareto frontier. Using LaMOO, we observe an improvement of more than 200% in sample efficiency compared to Bayesian-optimization and evolutionary multi-objective optimizers on different NAS datasets. For example, when combined with LaMOO, qEHVI achieves a 225% improvement in sample efficiency compared to using qEHVI alone on NasBench201. For real-world tasks, LaMOO achieves 97.36% accuracy with only 1.62M #Params on CIFAR10 in only 600 search samples. On ImageNet, our large model reaches 80.4% top-1 accuracy with only 522M #FLOPs.
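The abstract frames NAS as a multi-objective problem whose goal is a Pareto frontier rather than a single best architecture. As a minimal sketch (not the authors' LaMOO implementation, and using hypothetical sample values), the dominance relation that defines this frontier can be written as follows, with both objectives cast as maximization (accuracy, and negated parameter count):

```python
def dominates(a, b):
    """True if point a Pareto-dominates point b (all objectives maximized):
    a is at least as good on every objective and strictly better on one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_frontier(points):
    """Return the non-dominated subset of points (the Pareto frontier)."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical NAS samples as (accuracy, -params_in_millions); both maximized.
samples = [(0.973, -1.62), (0.970, -1.20), (0.965, -2.50), (0.975, -3.00)]
front = pareto_frontier(samples)
# (0.965, -2.50) is dominated by (0.973, -1.62): lower accuracy AND more params.
```

Methods in the LaMOO family score and partition the search space so that subsequent samples concentrate in regions whose points tend to land on (or near) this non-dominated set, which is where the reported sample-efficiency gains come from.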
Pages: 41
References
94 records
  • [1] Akimoto Y, 2019, PR MACH LEARN RES, V97
  • [2] [Anonymous], Nvidia's drive orin
  • [3] Finite-time analysis of the multiarmed bandit problem
    Auer, P
    Cesa-Bianchi, N
    Fischer, P
    [J]. MACHINE LEARNING, 2002, 47 (2-3) : 235 - 256
  • [4] Multiobjective GAs, quantitative indices, and pattern classification
    Bandyopadhyay, S
    Pal, SK
    Aruna, B
    [J]. IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS PART B-CYBERNETICS, 2004, 34 (05): : 2088 - 2099
  • [5] Belakaria S, 2019, ADV NEUR IN, V32
  • [6] Bender G, 2018, PR MACH LEARN RES, V80
  • [7] Beume N, 2006, PROCEEDINGS OF THE SECOND IASTED INTERNATIONAL CONFERENCE ON COMPUTATIONAL INTELLIGENCE, P231
  • [8] Boerner T. J., 2023, P PRACT EXP ADV RES, P173, DOI [DOI 10.1145/3569951.3597559, 10.1145/3569951.3597559]
  • [9] The balance between proximity and diversity in multiobjective evolutionary algorithms
    Bosman, PAN
    Thierens, D
    [J]. IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, 2003, 7 (02) : 174 - 188
  • [10] Busoniu L, 2013, IEEE SYMP ADAPT DYNA, P69, DOI 10.1109/ADPRL.2013.6614991