ModuleNet: Knowledge-Inherited Neural Architecture Search

Cited by: 15
Authors
Chen, Yaran [1 ,2 ]
Gao, Ruiyuan [3 ]
Liu, Fenggang [4 ]
Zhao, Dongbin [1 ,2 ]
Affiliations
[1] Chinese Acad Sci, Inst Automat, State Key Lab Management & Control Complex Syst, Beijing 100190, Peoples R China
[2] Univ Chinese Acad Sci, Coll Artificial Intelligence, Beijing 100049, Peoples R China
[3] Chinese Univ Hong Kong, Dept Comp Sci & Engn, Hong Kong, Peoples R China
[4] Beijing Inst Technol, Coll Automat, Beijing 100811, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Computer architecture; Task analysis; Knowledge based systems; Microprocessors; Statistics; Sociology; Computational modeling; Evaluation algorithm; knowledge inherited; neural architecture search (NAS); GENETIC ALGORITHM; MODEL;
DOI
10.1109/TCYB.2021.3078573
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology];
Discipline Classification Code
0812 ;
Abstract
Although neural architecture search (NAS) can improve deep models, it typically neglects the precious knowledge embedded in existing models. The high computational and time cost of NAS also means that we should not search from scratch, but should make every attempt to reuse existing knowledge. In this article, we discuss what kind of knowledge in a model can and should be used for a new architecture design. Then, we propose a new NAS algorithm, namely, ModuleNet, which can fully inherit knowledge from existing convolutional neural networks. To make full use of existing models, we decompose them into different modules, which retain their weights and together constitute a knowledge base. We then sample and search for a new architecture according to this knowledge base. Unlike previous search algorithms, and benefiting from the inherited knowledge, our method can directly search for architectures in the macrospace with the NSGA-II algorithm, without tuning the parameters in these modules. Experiments show that our strategy can efficiently evaluate the performance of a new architecture even without tuning the weights in convolutional layers. With the help of the inherited knowledge, our search results consistently achieve better performance on various datasets (CIFAR10, CIFAR100, and ImageNet) than the original architectures.
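The abstract describes a multiobjective evolutionary search (NSGA-II) over candidate architectures assembled from a knowledge base of frozen, weight-carrying modules. The following minimal sketch illustrates only the core NSGA-II ingredient, Pareto non-dominated selection, over toy candidates encoded as tuples of module indices; the module set, objectives, and encoding here are hypothetical stand-ins, not the paper's actual implementation, where fitness would come from evaluating the assembled network without retraining the inherited convolutional weights.

```python
import random

# Hypothetical knowledge base: indices of 8 frozen modules decomposed
# from existing pretrained networks.
KNOWLEDGE_BASE = list(range(8))

def evaluate(arch):
    """Toy two-objective score for a candidate architecture:
    (pseudo-accuracy to maximize, resource cost to minimize)."""
    acc = sum(arch) / (len(arch) * (len(KNOWLEDGE_BASE) - 1))
    cost = len(set(arch)) / len(KNOWLEDGE_BASE)
    return acc, cost

def dominates(a, b):
    """Pareto dominance used by NSGA-II: a is at least as good as b
    in every objective and strictly better in at least one."""
    (acc_a, cost_a), (acc_b, cost_b) = evaluate(a), evaluate(b)
    no_worse = acc_a >= acc_b and cost_a <= cost_b
    strictly_better = acc_a > acc_b or cost_a < cost_b
    return no_worse and strictly_better

def nondominated_front(population):
    """First Pareto front: candidates dominated by no other candidate."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q != p)]

# Sample a population of candidate architectures (4 module slots each)
# and extract the non-dominated front, as one NSGA-II generation would.
random.seed(0)
population = [tuple(random.choice(KNOWLEDGE_BASE) for _ in range(4))
              for _ in range(20)]
front = nondominated_front(population)
```

A full NSGA-II loop would additionally apply crowding-distance sorting, crossover, and mutation over the module encodings across generations; the point here is only how competing objectives are compared without any weight tuning inside the modules.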
Pages: 11661-11671 (11 pages)
Related Papers
50 total
  • [41] NAS-Bench-NLP: Neural Architecture Search Benchmark for Natural Language Processing
    Klyuchnikov, Nikita
    Trofimov, Ilya
    Artemova, Ekaterina
    Salnikov, Mikhail
    Fedorov, Maxim
    Filippov, Alexander
    Burnaev, Evgeny
    IEEE ACCESS, 2022, 10 : 45736 - 45747
  • [42] Multiobjective Reinforcement Learning-Based Neural Architecture Search for Efficient Portrait Parsing
    Lyu, Bo
    Wen, Shiping
    Shi, Kaibo
    Huang, Tingwen
    IEEE TRANSACTIONS ON CYBERNETICS, 2023, 53 (02) : 1158 - 1169
  • [43] NAS-TasNet: Neural Architecture Search for Time-Domain Speech Separation
    Lee, Joo-Hyun
    Chang, Joon-Hyuk
    Yang, Jae-Mo
    Moon, Han-Gil
    IEEE ACCESS, 2022, 10 : 56031 - 56043
  • [44] Universal Binary Neural Networks Design by Improved Differentiable Neural Architecture Search
    Tan, Menghao
    Gao, Weifeng
    Li, Hong
    Xie, Jin
    Gong, Maoguo
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (10) : 9153 - 9165
  • [45] Real-Time Federated Evolutionary Neural Architecture Search
    Zhu, Hangyu
    Jin, Yaochu
    IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, 2022, 26 (02) : 364 - 378
  • [46] CommGNAS: Unsupervised Graph Neural Architecture Search for Community Detection
    Gao, Jianliang
    Chen, Jiamin
    Oloulade, Babatounde Moctard
    Al-Sabri, Raeed
    Lyu, Tengfei
    Zhang, Ji
    Li, Zhao
    IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTING, 2024, 12 (02) : 444 - 454
  • [47] Channel Shuffle Neural Architecture Search for Key Word Spotting
    Lee, Bokyeung
    Kim, Donghyeon
    Kim, Gwantae
    Ko, Hanseok
    IEEE SIGNAL PROCESSING LETTERS, 2023, 30 : 443 - 447
  • [48] Exploring the Intersection Between Neural Architecture Search and Continual Learning
    Shahawy, Mohamed
    Benkhelifa, Elhadj
    White, David
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024,
  • [49] FINCH: Enhancing Federated Learning With Hierarchical Neural Architecture Search
    Liu, Jianchun
    Yan, Jiaming
    Xu, Hongli
    Wang, Zhiyuan
    Huang, Jinyang
    Xu, Yang
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2024, 23 (05) : 6012 - 6026
  • [50] Fisher Task Distance and its Application in Neural Architecture Search
    Le, Cat P.
    Soltani, Mohammadreza
    Dong, Juncheng
    Tarokh, Vahid
    IEEE ACCESS, 2022, 10 : 47235 - 47249