APENAS: An Asynchronous Parallel Evolution Based Multi-objective Neural Architecture Search

Cited by: 4
Authors
Hu, Mengtao [1 ]
Liu, Li [1 ]
Wang, Wei [1 ]
Liu, Yao [1 ]
Affiliations
[1] East China Normal Univ, Sch Data Sci & Engn, Shanghai, Peoples R China
Source
2020 IEEE INTL SYMP ON PARALLEL & DISTRIBUTED PROCESSING WITH APPLICATIONS, INTL CONF ON BIG DATA & CLOUD COMPUTING, INTL SYMP SOCIAL COMPUTING & NETWORKING, INTL CONF ON SUSTAINABLE COMPUTING & COMMUNICATIONS (ISPA/BDCLOUD/SOCIALCOM/SUSTAINCOM 2020) | 2020
Keywords
automated machine learning; neural architecture search; multi-objective; asynchronous parallel evolution; NETWORKS;
DOI
10.1109/ISPA-BDCloud-SocialCom-SustainCom51426.2020.00045
Chinese Library Classification (CLC)
TP3 [Computing technology, computer technology];
Subject Classification Code
0812
Abstract
Machine learning is widely used in pattern classification, image processing, and speech recognition. Neural architecture search (NAS) can effectively reduce the dependence of machine learning on human experts. Because NAS is highly complex, the trade-off between time consumption and classification accuracy is vital. This paper presents APENAS, an asynchronous parallel evolution based multi-objective neural architecture search, which uses classification accuracy and the number of parameters as objectives and encodes network architectures as individuals. To make full use of computing resources, we propose a multi-generation undifferentiated fusion scheme that achieves asynchronous parallel evolution on multiple GPUs or CPUs, speeding up the NAS process. Accordingly, we propose an election pool and a buffer pool for two-layer filtration of individuals: individuals are sorted in the election pool by non-dominated sorting and filtered in the buffer pool by the roulette-wheel algorithm to improve the elitism of the Pareto front. APENAS is evaluated on the CIFAR-10 and CIFAR-100 datasets [25]. The experimental results demonstrate that APENAS achieves 90.05% accuracy on CIFAR-10 with only 0.07 million parameters, which is comparable to the state of the art. Notably, APENAS has high parallel scalability, achieving 92.5% parallel efficiency on 64 nodes.
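The two-layer filtration described in the abstract, non-dominated sorting over the two objectives (maximize accuracy, minimize parameter count) followed by roulette-wheel selection, can be sketched in plain Python. This is an illustrative reconstruction, not the authors' implementation: the pool contents, the weight function, and all names here are assumptions.

```python
import random

def dominates(a, b):
    # a, b: (accuracy, params). Higher accuracy and fewer parameters are better.
    # a dominates b if it is no worse in both objectives and strictly better in one.
    return (a[0] >= b[0] and a[1] <= b[1]) and (a[0] > b[0] or a[1] < b[1])

def non_dominated_sort(pop):
    """Partition the population into Pareto fronts (front 0 = non-dominated set)."""
    fronts = []
    remaining = list(pop)
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining if q is not p)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

def roulette_select(pop, k, weight):
    """Sample k individuals (with replacement), probability proportional to weight."""
    total = sum(weight(p) for p in pop)
    chosen = []
    for _ in range(k):
        r = random.uniform(0, total)
        acc = 0.0
        for p in pop:
            acc += weight(p)
            if acc >= r:
                chosen.append(p)
                break
    return chosen

# Toy election pool: (accuracy, parameter count in millions) -- made-up values.
election_pool = [(0.90, 0.07), (0.88, 0.05), (0.91, 0.30), (0.85, 0.04), (0.89, 0.30)]
fronts = non_dominated_sort(election_pool)
# Buffer pool drawn from the best front; this weight favours accurate, small networks.
buffer_pool = roulette_select(fronts[0], k=2, weight=lambda p: p[0] / (1.0 + p[1]))
```

In this toy pool, (0.89, 0.30) is dominated by (0.90, 0.07) and falls into the second front; the other four individuals are mutually non-dominated and form the Pareto front from which the buffer pool is sampled.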
Pages: 153-159
Number of pages: 7
Cited References
28 records in total
[21] Xie, Lingxi; Yuille, Alan. Genetic CNN. 2017 IEEE International Conference on Computer Vision (ICCV), 2017, pp. 1388-1397.
[22] Xie, S., 2019, Proceedings of the International Conference on Learning Representations (ICLR).
[23] Yao Liu, 2019, 2019 IEEE 21st International Conference on High Performance Computing and Communications; IEEE 17th International Conference on Smart City; IEEE 5th International Conference on Data Science and Systems (HPCC/SmartCity/DSS), Proceedings, p. 54, DOI 10.1109/HPCC/SmartCity/DSS.2019.00023.
[26] Zhong, Zhao; Yan, Junjie; Wu, Wei; Shao, Jing; Liu, Cheng-Lin. Practical Block-wise Neural Network Architecture Generation. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 2423-2432.
[27] Zoph, B., 2017, Proceedings of the International Conference on Learning Representations (ICLR).
[28] Zoph, Barret; Vasudevan, Vijay; Shlens, Jonathon; Le, Quoc V. Learning Transferable Architectures for Scalable Image Recognition. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 8697-8710.