Search-Efficient NAS: Neural Architecture Search for Classification

Cited by: 8
Authors
Rana, Amrita [1 ]
Kim, Kyung Ki [1 ]
Affiliations
[1] Daegu Univ, Dept Elect Engn, Daegu, South Korea
Source
2022 IEEE INTERNATIONAL CONFERENCE ON DATA MINING WORKSHOPS, ICDMW | 2022
Funding
National Research Foundation, Singapore;
Keywords
Neural Architecture Search; DARTS; classification;
DOI
10.1109/ICDMW58026.2022.00048
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recently, there has been an increasing trend toward automating neural architecture search (NAS). Most state-of-the-art approaches require high computational cost and specialized hardware. Among existing methods, Differentiable Architecture Search (DARTS) made automated search feasible within a few GPU days. However, DARTS struggles with memory overhead and often collapses when the search is run for a larger number of epochs. To overcome the memory overhead and long search time, this paper proposes sampling only a selected subset of channels, which both reduces redundant operations and shortens the search. This method reduces the GPU search time drastically, to 6.3 hours, compared to DARTS.
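The channel-sampling idea in the abstract can be illustrated with a minimal sketch: only a 1/k fraction of channels is routed through the expensive searched (mixed) operations, while the rest bypass them, cutting memory roughly by a factor of k. All names below are illustrative assumptions, not from the paper; channels are modeled as plain floats for simplicity.

```python
import random

def partial_channel_sample(x, k=4, seed=0):
    """Split a feature map's channels into a sampled 1/k part (to be
    sent through the searched operations) and a bypassed part.

    x : list of channel values (floats stand in for channel tensors)
    k : sampling ratio; only len(x) // k channels incur the cost of
        the candidate operations.
    """
    rng = random.Random(seed)
    c = len(x)
    sampled_idx = set(rng.sample(range(c), c // k))
    sampled = [x[i] for i in range(c) if i in sampled_idx]
    bypassed = [x[i] for i in range(c) if i not in sampled_idx]
    return sampled, bypassed

def mixed_op(channels):
    # Stand-in for DARTS's weighted sum over candidate operations;
    # here just a dummy transform on the sampled channels.
    return [2.0 * v for v in channels]

# Only C/k channels pass through the (expensive) mixed operation;
# the processed and bypassed channels are then recombined.
x = [float(i) for i in range(8)]
sampled, bypassed = partial_channel_sample(x, k=4)
out = mixed_op(sampled) + bypassed
```

In the real search, the recombined channels would typically be shuffled so that different channels are sampled across iterations; that detail is omitted here.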
Pages: 261-262
Page count: 2