RelativeNAS: Relative Neural Architecture Search via Slow-Fast Learning

Cited: 29
Authors
Tan, Hao [1 ]
Cheng, Ran [1 ]
Huang, Shihua [1 ]
He, Cheng [1 ]
Qiu, Changxiao [2 ]
Yang, Fan [2 ]
Luo, Ping [3 ]
Affiliations
[1] Southern Univ Sci & Technol, Univ Key Lab Evolving Intelligent Syst Guangdong, Dept Comp Sci & Engn, Shenzhen 518055, Peoples R China
[2] Huawei Technol Co Ltd, Hisilicon Res Dept, Shenzhen 518055, Peoples R China
[3] Univ Hong Kong, Dept Comp Sci, Hong Kong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Computer architecture; Statistics; Sociology; Search problems; Optimization; Neural networks; Estimation; AutoML; convolutional neural network (CNN); neural architecture search (NAS); population-based search; slow-fast learning; NETWORKS;
DOI
10.1109/TNNLS.2021.3096658
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Despite the remarkable successes of convolutional neural networks (CNNs) in computer vision, manually designing a CNN is time-consuming and error-prone. Among the various neural architecture search (NAS) methods proposed to automate the design of high-performance CNNs, differentiable NAS and population-based NAS have attracted increasing interest due to their distinct strengths. To combine the merits of both while overcoming their deficiencies, this work proposes a novel NAS method, RelativeNAS. As the key to efficient search, RelativeNAS performs joint learning between fast learners (i.e., decoded networks with relatively lower loss values) and slow learners in a pairwise manner. Moreover, since RelativeNAS only requires a low-fidelity performance estimation to distinguish the fast learner from the slow learner in each pair, it saves considerable computation cost when training the candidate architectures. The proposed RelativeNAS brings several unique advantages: 1) it achieves state-of-the-art performance on ImageNet with a top-1 error rate of 24.88%, outperforming DARTS and AmoebaNet-B by 1.82% and 1.12%, respectively; 2) it spends only 9 h on a single 1080Ti GPU to obtain the discovered cells, i.e., 3.75x and 7875x faster than DARTS and AmoebaNet, respectively; and 3) the cells discovered on CIFAR-10 can be directly transferred to object detection, semantic segmentation, and keypoint detection, yielding competitive results of 73.1% mAP on PASCAL VOC, 78.7% mIoU on Cityscapes, and 68.5% AP on MSCOCO, respectively. The implementation of RelativeNAS is available at https://github.com/EMI-Group/RelativeNAS.
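The abstract compresses the core mechanism: candidate architectures are evaluated cheaply, paired up, and within each pair the lower-loss candidate acts as the fast learner while the slow learner updates toward it. The sketch below is a minimal illustration of that pairwise slow-fast update, not the authors' implementation (see the GitHub link above for that); the continuous [0, 1] encoding, the hypothetical low_fidelity_loss stand-in for the paper's low-fidelity performance estimation, and the step/noise constants are all assumptions made for this example.

    # Minimal sketch of pairwise slow-fast learning; names and constants
    # are illustrative assumptions, not from the RelativeNAS codebase.
    import random
    from typing import Callable, List

    import numpy as np

    def slow_fast_step(
        population: List[np.ndarray],
        low_fidelity_loss: Callable[[np.ndarray], float],
        step: float = 0.5,
        noise: float = 0.05,
    ) -> List[np.ndarray]:
        """One generation: pair architectures at random and move each
        slow learner toward its fast learner (the lower-loss partner)."""
        indices = list(range(len(population)))
        random.shuffle(indices)
        new_pop = [v.copy() for v in population]
        for i, j in zip(indices[::2], indices[1::2]):
            li = low_fidelity_loss(population[i])  # cheap estimate, not full training
            lj = low_fidelity_loss(population[j])
            fast, slow = (i, j) if li <= lj else (j, i)
            direction = population[fast] - population[slow]
            # The slow learner learns from the fast learner; a little noise
            # preserves diversity in the population.
            new_pop[slow] = np.clip(
                population[slow]
                + step * direction
                + noise * np.random.randn(*population[slow].shape),
                0.0,
                1.0,
            )
        return new_pop

    if __name__ == "__main__":
        # Toy stand-in for low-fidelity estimation: distance to a hidden optimum.
        target = np.full(8, 0.7)
        loss = lambda v: float(np.sum((v - target) ** 2))
        pop = [np.random.rand(8) for _ in range(10)]
        for _ in range(50):
            pop = slow_fast_step(pop, loss)
        print("best estimated loss:", min(loss(v) for v in pop))

In the method proper, each vector decodes into a cell-based network, and the loss comes from low-fidelity training rather than full convergence, which is what keeps the reported search cost to roughly 9 GPU-hours.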
Pages: 475 - 489
Page count: 15
Related Articles
50 records in total
  • [21] Proxy Data Generation for Fast and Efficient Neural Architecture Search
    Park, Minje
    JOURNAL OF ELECTRICAL ENGINEERING & TECHNOLOGY, 2023, 18 (03) : 2307 - 2316
  • [22] Multimodal Continual Graph Learning with Neural Architecture Search
    Cai, Jie
    Wang, Xin
    Guan, Chaoyu
    Tang, Yateng
    Xu, Jin
    Zhong, Bin
    Zhu, Wenwu
    PROCEEDINGS OF THE ACM WEB CONFERENCE 2022 (WWW'22), 2022, : 1292 - 1300
  • [23] Multi-Objective Neural Architecture Search for Efficient and Fast Semantic Segmentation on Edge
    Dou, ZiWen
    Dong, Ye
    IEEE TRANSACTIONS ON INTELLIGENT VEHICLES, 2024, 9 (01) : 1346 - 1357
  • [24] Reinforcement Learning for Neural Architecture Search in Hyperspectral Unmixing
    Han, Zhu
    Hong, Danfeng
    Gao, Lianru
    Roy, Swalpa Kumar
    Zhang, Bing
    Chanussot, Jocelyn
    IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, 2022, 19
  • [25] FNA++: Fast Network Adaptation via Parameter Remapping and Architecture Search
    Fang, Jiemin
    Sun, Yuzhu
    Zhang, Qian
    Peng, Kangjian
    Li, Yuan
    Liu, Wenyu
    Wang, Xinggang
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2021, 43 (09) : 2990 - 3004
  • [26] CMQ: Crossbar-Aware Neural Network Mixed-Precision Quantization via Differentiable Architecture Search
    Peng, Jie
    Liu, Haijun
    Zhao, Zhongjin
    Li, Zhiwei
    Liu, Sen
    Li, Qingjiang
    IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 2022, 41 (11) : 4124 - 4133
  • [27] A Fast Evolutionary Knowledge Transfer Search for Multiscale Deep Neural Architecture
    Zhang, Ruohan
    Jiao, Licheng
    Wang, Dan
    Liu, Fang
    Liu, Xu
    Yang, Shuyuan
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (12) : 17450 - 17464
  • [28] Improving the Efficient Neural Architecture Search via Rewarding Modifications
    Gallo, Ignazio
    Magistrali, Gabriele
    Landro, Nicola
    La Grassa, Riccardo
    2020 35TH INTERNATIONAL CONFERENCE ON IMAGE AND VISION COMPUTING NEW ZEALAND (IVCNZ), 2020
  • [29] From federated learning to federated neural architecture search: a survey
    Zhu, Hangyu
    Zhang, Haoyu
    Jin, Yaochu
    COMPLEX & INTELLIGENT SYSTEMS, 2021, 7 (02) : 639 - 657
  • [30] Scalable reinforcement learning-based neural architecture search
    Cassimon, Amber
    Mercelis, Siegfried
    Mets, Kevin
    NEURAL COMPUTING AND APPLICATIONS, 2025, 37 (01) : 231 - 261