RelativeNAS: Relative Neural Architecture Search via Slow-Fast Learning

Cited by: 29
Authors
Tan, Hao [1 ]
Cheng, Ran [1 ]
Huang, Shihua [1 ]
He, Cheng [1 ]
Qiu, Changxiao [2 ]
Yang, Fan [2 ]
Luo, Ping [3 ]
Affiliations
[1] Southern Univ Sci & Technol, Univ Key Lab Evolving Intelligent Syst Guangdong, Dept Comp Sci & Engn, Shenzhen 518055, Peoples R China
[2] Huawei Technol Co Ltd, Hisilicon Res Dept, Shenzhen 518055, Peoples R China
[3] Univ Hong Kong, Dept Comp Sci, Hong Kong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Computer architecture; Statistics; Sociology; Search problems; Optimization; Neural networks; Estimation; AutoML; convolutional neural network (CNN); neural architecture search (NAS); population-based search; slow-fast learning; NETWORKS;
DOI
10.1109/TNNLS.2021.3096658
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Despite the remarkable successes of convolutional neural networks (CNNs) in computer vision, it is time-consuming and error-prone to manually design a CNN. Among the various neural architecture search (NAS) methods motivated to automate the design of high-performance CNNs, differentiable NAS and population-based NAS are attracting increasing interest due to their unique characteristics. To benefit from the merits of both while overcoming their deficiencies, this work proposes a novel NAS method, RelativeNAS. As the key to efficient search, RelativeNAS performs joint learning between fast learners (i.e., decoded networks with relatively lower loss values) and slow learners in a pairwise manner. Moreover, since RelativeNAS only requires low-fidelity performance estimation to distinguish between each pair of fast learner and slow learner, it reduces the computation cost of training the candidate architectures. The proposed RelativeNAS brings several unique advantages: 1) it achieves state-of-the-art performance on ImageNet with a top-1 error rate of 24.88%, outperforming DARTS and AmoebaNet-B by 1.82% and 1.12%, respectively; 2) it takes only 9 h on a single 1080Ti GPU to obtain the discovered cells, 3.75x and 7875x faster than DARTS and AmoebaNet, respectively; and 3) it shows that the discovered cells obtained on CIFAR-10 can be directly transferred to object detection, semantic segmentation, and keypoint detection, yielding competitive results of 73.1% mAP on PASCAL VOC, 78.7% mIoU on Cityscapes, and 68.5% AP on MSCOCO, respectively. The implementation of RelativeNAS is available at https://github.com/EMI-Group/RelativeNAS.
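The pairwise slow-fast learning described above can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation: it assumes architectures are encoded as real-valued vectors and uses a simple randomized move of each slow learner toward its paired fast learner; the function names, the `step` parameter, and the update rule details are assumptions for illustration.

```python
import random

def slow_fast_step(population, loss_fn, step=0.5):
    """One generation of pairwise slow-fast learning (illustrative sketch).

    population: list of architecture encodings (lists of floats).
    loss_fn: a low-fidelity performance estimate; lower is better.
    Within each randomly formed pair, the higher-loss member (slow
    learner) moves partway toward the lower-loss member (fast learner).
    """
    indices = list(range(len(population)))
    random.shuffle(indices)  # form random pairs
    for i, j in zip(indices[::2], indices[1::2]):
        # Only a relative comparison is needed, so a cheap estimate suffices.
        fast, slow = (i, j) if loss_fn(population[i]) <= loss_fn(population[j]) else (j, i)
        population[slow] = [
            s + step * random.random() * (f - s)
            for f, s in zip(population[fast], population[slow])
        ]
    return population
```

Because only the relative ordering within each pair matters, the loss estimate can be noisy and cheap, which is where the claimed savings in training cost come from.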
Pages: 475-489
Page count: 15
Related Papers
50 total
  • [1] Collaborative Neural Architecture Search for Personalized Federated Learning
    Liu, Yi
    Guo, Song
    Zhang, Jie
    Hong, Zicong
    Zhan, Yufeng
    Zhou, Qihua
    IEEE TRANSACTIONS ON COMPUTERS, 2025, 74 (01) : 250 - 262
  • [2] Accelerating Evolutionary Neural Architecture Search via Multifidelity Evaluation
    Yang, Shangshang
    Tian, Ye
    Xiang, Xiaoshu
    Peng, Shichen
    Zhang, Xingyi
    IEEE TRANSACTIONS ON COGNITIVE AND DEVELOPMENTAL SYSTEMS, 2022, 14 (04) : 1778 - 1792
  • [3] Neural Architecture Search via Proxy Validation
    Li, Yanxi
    Dong, Minjing
    Wang, Yunhe
    Xu, Chang
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (06) : 7595 - 7610
  • [4] Two-Stage Evolutionary Neural Architecture Search for Transfer Learning
    Wen, Yu-Wei
    Peng, Sheng-Hsuan
    Ting, Chuan-Kang
    IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, 2021, 25 (05) : 928 - 940
  • [5] Fast Search of Face Recognition Model for a Mobile Device Based on Neural Architecture Comparator
    Savchenko, Andrey V.
    Savchenko, Lyudmila V.
    Makarov, Ilya
    IEEE ACCESS, 2023, 11 : 65977 - 65990
  • [6] Designing Efficient DNNs via Hardware-Aware Neural Architecture Search and Beyond
    Luo, Xiangzhong
    Liu, Di
    Huai, Shuo
    Kong, Hao
    Chen, Hui
    Liu, Weichen
    IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 2022, 41 (06) : 1799 - 1812
  • [7] Automatic Design of CNNs via Differentiable Neural Architecture Search for PolSAR Image Classification
    Dong, Hongwei
    Zou, Bin
    Zhang, Lamei
    Zhang, Siyu
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2020, 58 (09) : 6362 - 6375
  • [8] Exploring the Intersection Between Neural Architecture Search and Continual Learning
    Shahawy, Mohamed
    Benkhelifa, Elhadj
    White, David
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024,
  • [9] Reinforcement learning for neural architecture search: A review
    Jaafra, Yesmina
    Laurent, Jean Luc
    Deruyver, Aline
    Naceur, Mohamed Saber
    IMAGE AND VISION COMPUTING, 2019, 89 : 57 - 66
  • [10] Exploring Neural Architecture Search Space via Deep Deterministic Sampling
    Mills, Keith G.
    Salameh, Mohammad
    Niu, Di
    Han, Fred X.
    Rezaei, Seyed Saeed Changiz
    Yao, Hengshuai
    Lu, Wei
    Lian, Shuo
    Jui, Shangling
    IEEE ACCESS, 2021, 9 : 110962 - 110974