BaLeNAS: Differentiable Architecture Search via the Bayesian Learning Rule

Cited by: 6
Authors
Zhang, Miao [1 ]
Pan, Shirui [2 ]
Chang, Xiaojun [3 ,4 ]
Su, Steven [5 ]
Hu, Jilin [1 ]
Haffari, Gholamreza [2 ]
Yang, Bin [1 ]
Affiliations
[1] Aalborg Univ, Aalborg, Denmark
[2] Monash Univ, Clayton, Vic, Australia
[3] UTS, AAII, ReLER, Sydney, NSW, Australia
[4] RMIT Univ, Melbourne, Vic, Australia
[5] Shandong First Med Univ, Tai An, Shandong, Peoples R China
Source
2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) | 2022
Funding
Australian Research Council
DOI
10.1109/CVPR52688.2022.01157
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Differentiable Architecture Search (DARTS) has received massive attention in recent years, mainly because it significantly reduces the computational cost through weight sharing and continuous relaxation. However, more recent works find that existing differentiable NAS techniques struggle to outperform naive baselines, yielding deteriorating architectures as the search proceeds. Rather than directly optimizing the architecture parameters, this paper formulates neural architecture search as a distribution learning problem by relaxing the architecture weights into Gaussian distributions. Leveraging natural-gradient variational inference (NGVI), the architecture distribution can be easily optimized on top of existing codebases without incurring additional memory or computational cost. We demonstrate how differentiable NAS benefits from Bayesian principles, which enhance exploration and improve stability. Experimental results on NAS benchmark datasets confirm the significant improvements the proposed framework can make. In addition, instead of simply applying argmax to the learned parameters, we further leverage recently proposed training-free proxies in NAS to select the optimal architecture from a group of architectures drawn from the optimized distribution, achieving state-of-the-art results on the NAS-Bench-201 and NAS-Bench-1shot1 benchmarks. Our best architecture in the DARTS search space also attains competitive test errors of 2.37%, 15.72%, and 24.2% on CIFAR-10, CIFAR-100, and ImageNet, respectively.
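To make the distribution-learning formulation concrete, here is a minimal sketch (not the authors' code; the tensor shapes, the dummy objective, and the proxy_score stand-in are all assumptions). It relaxes the architecture weights into per-edge Gaussians, takes reparameterized gradient steps on their means and variances (a simplified stand-in for the NGVI update), and then ranks sampled discrete architectures with a placeholder training-free proxy instead of a plain argmax:

```python
import torch

# Hypothetical sizes: one Gaussian per (edge, operation) architecture weight,
# roughly matching a DARTS cell; the exact shapes are assumptions.
NUM_EDGES, NUM_OPS = 14, 8

# Variational parameters of q(alpha) = N(mu, sigma^2), one per architecture weight.
mu = torch.zeros(NUM_EDGES, NUM_OPS, requires_grad=True)
log_sigma = torch.full((NUM_EDGES, NUM_OPS), -2.0, requires_grad=True)

def sample_alpha():
    """Reparameterized draw: alpha = mu + sigma * eps with eps ~ N(0, I)."""
    return mu + log_sigma.exp() * torch.randn(NUM_EDGES, NUM_OPS)

def surrogate_loss(alpha):
    """Dummy stand-in for the supernet validation loss evaluated with
    softmax-relaxed architecture weights (an assumption, for illustration)."""
    return (torch.softmax(alpha, dim=-1) ** 2).sum()

# The paper optimizes the distribution with natural-gradient variational
# inference; Adam on (mu, log_sigma) with reparameterization gradients is a
# simplified substitute here, not the NGVI update itself.
opt = torch.optim.Adam([mu, log_sigma], lr=3e-4)
for _ in range(50):
    opt.zero_grad()
    surrogate_loss(sample_alpha()).backward()
    opt.step()

def proxy_score(arch):
    """Hypothetical training-free proxy (a synflow-style score would go
    here); this dummy just sums the chosen operation indices."""
    return float(arch.sum())

# Select among sampled discrete architectures with the proxy, rather than
# simply taking argmax over the learned means.
candidates = [sample_alpha().argmax(dim=-1) for _ in range(100)]
best = max(candidates, key=proxy_score)
print(best)
```

In a real search, surrogate_loss would be the bilevel validation loss of the weight-sharing supernet and proxy_score a genuine zero-cost metric; the sketch only shows how sampling from the learned Gaussian distribution replaces a single deterministic architecture estimate.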
Pages: 11861-11870
Page count: 10