RelativeNAS: Relative Neural Architecture Search via Slow-Fast Learning

Cited by: 29
Authors
Tan, Hao [1 ]
Cheng, Ran [1 ]
Huang, Shihua [1 ]
He, Cheng [1 ]
Qiu, Changxiao [2 ]
Yang, Fan [2 ]
Luo, Ping [3 ]
Affiliations
[1] Southern Univ Sci & Technol, Univ Key Lab Evolving Intelligent Syst Guangdong, Dept Comp Sci & Engn, Shenzhen 518055, Peoples R China
[2] Huawei Technol Co Ltd, Hisilicon Res Dept, Shenzhen 518055, Peoples R China
[3] Univ Hong Kong, Dept Comp Sci, Hong Kong, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Computer architecture; Statistics; Sociology; Search problems; Optimization; Neural networks; Estimation; AutoML; convolutional neural network (CNN); neural architecture search (NAS); population-based search; slow-fast learning; NETWORKS;
DOI
10.1109/TNNLS.2021.3096658
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Despite the remarkable successes of convolutional neural networks (CNNs) in computer vision, manually designing a CNN is time-consuming and error-prone. Among the various neural architecture search (NAS) methods aimed at automating the design of high-performance CNNs, differentiable NAS and population-based NAS are attracting increasing interest owing to their distinctive strengths. To combine the merits of both while overcoming their deficiencies, this work proposes a novel NAS method, RelativeNAS. As the key to efficient search, RelativeNAS performs joint learning between fast learners (i.e., decoded networks with relatively lower loss values) and slow learners in a pairwise manner. Moreover, since RelativeNAS only requires low-fidelity performance estimation to distinguish each pair of fast and slow learners, it saves considerable computation cost when training the candidate architectures. The proposed RelativeNAS brings several unique advantages: 1) it achieves state-of-the-art performance on ImageNet with a top-1 error rate of 24.88%, outperforming DARTS and AmoebaNet-B by 1.82% and 1.12%, respectively; 2) it spends only 9 h on a single 1080Ti GPU to obtain the discovered cells, which is 3.75x and 7875x faster than DARTS and AmoebaNet, respectively; and 3) it shows that the cells discovered on CIFAR-10 can be directly transferred to object detection, semantic segmentation, and keypoint detection, yielding competitive results of 73.1% mAP on PASCAL VOC, 78.7% mIoU on Cityscapes, and 68.5% AP on MSCOCO, respectively. The implementation of RelativeNAS is available at https://github.com/EMI-Group/RelativeNAS.
Pages: 475-489
Number of pages: 15
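Slow-fast learning sketch
The pairwise slow-fast mechanism described in the abstract can be made concrete with a short sketch. The Python snippet below is a minimal illustration under stated assumptions, not the authors' implementation (see the linked repository for that): it assumes a real-valued architecture encoding in [0, 1], uses a hypothetical proxy_loss function as a stand-in for low-fidelity performance estimation, and applies a simplified slow-toward-fast update with momentum in place of the paper's exact update rule.

# Illustrative sketch of pairwise slow-fast learning: candidate
# architectures are real-valued encodings; each generation, the
# population is split into random pairs, a low-fidelity loss estimate
# ranks each pair, and the slow learner (higher loss) moves toward the
# fast learner (lower loss). The update rule, hyperparameters, and
# proxy_loss are simplifying assumptions, not the paper's formulation.
import numpy as np


def proxy_loss(encoding: np.ndarray) -> float:
    """Stand-in for low-fidelity performance estimation (e.g., loss of
    the decoded network after a few training steps). Here: a toy
    quadratic with optimum at 0.3 in every dimension."""
    return float(np.sum((encoding - 0.3) ** 2))


def slow_fast_step(population: np.ndarray,
                   velocities: np.ndarray,
                   rng: np.random.Generator,
                   momentum: float = 0.5) -> None:
    """One in-place slow-fast learning step over random pairs."""
    order = rng.permutation(len(population))
    for a, b in zip(order[::2], order[1::2]):
        # Only the within-pair *ranking* matters, so a cheap
        # low-fidelity estimate suffices to tell fast from slow.
        if proxy_loss(population[a]) <= proxy_loss(population[b]):
            fast, slow = a, b
        else:
            fast, slow = b, a
        # The slow learner moves toward the fast learner with a random
        # step size plus momentum; the fast learner stays unchanged.
        step = rng.uniform(0.0, 1.0) * (population[fast] - population[slow])
        velocities[slow] = momentum * velocities[slow] + step
        population[slow] = np.clip(population[slow] + velocities[slow], 0.0, 1.0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pop = rng.uniform(0.0, 1.0, size=(20, 8))  # 20 encodings, 8 genes each
    vel = np.zeros_like(pop)
    for generation in range(50):
        slow_fast_step(pop, vel, rng)
    best = min(pop, key=proxy_loss)
    print("best proxy loss:", proxy_loss(best))

Because only the within-pair ranking is consumed, the performance estimate can be deliberately cheap (e.g., a handful of warm-up training steps per decoded network), which is where the reported search-cost savings come from.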