HR-NAS: Searching Efficient High-Resolution Neural Architectures with Lightweight Transformers

Cited by: 40
Authors
Ding, Mingyu [1 ,4 ]
Lian, Xiaochen [2 ]
Yang, Linjie [2 ]
Wang, Peng [2 ]
Jin, Xiaojie [2 ]
Lu, Zhiwu [3 ]
Luo, Ping [1 ]
Affiliations
[1] Univ Hong Kong, Hong Kong, Peoples R China
[2] Bytedance Inc, Beijing, Peoples R China
[3] Renmin Univ China, Gaoling Sch Artificial Intelligence, Beijing, Peoples R China
[4] Bytedance, Beijing, Peoples R China
Source
2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021 | 2021
Funding
National Natural Science Foundation of China;
Keywords
NETWORK;
DOI
10.1109/CVPR46437.2021.00300
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline classification codes
081104; 0812; 0835; 1405;
Abstract
High-resolution representations (HR) are essential for dense prediction tasks such as segmentation, detection, and pose estimation. Learning HR representations is typically ignored in previous Neural Architecture Search (NAS) methods, which focus on image classification. This work proposes a novel NAS method, called HR-NAS, which finds efficient and accurate networks for different tasks by effectively encoding multiscale contextual information while maintaining high-resolution representations. In HR-NAS, we renovate both the NAS search space and its searching strategy. To better encode multiscale image contexts in the search space of HR-NAS, we first carefully design a lightweight transformer whose computational complexity can be dynamically adjusted with respect to different objective functions and computation budgets. To maintain high-resolution representations in the learned networks, HR-NAS adopts a multi-branch architecture that provides convolutional encoding of multiple feature resolutions, inspired by HRNet [73]. Last, we propose an efficient fine-grained search strategy to train HR-NAS, which effectively explores the search space and finds optimal architectures given various tasks and computation resources. As shown in Fig. 1(a), HR-NAS achieves state-of-the-art trade-offs between performance and FLOPs for three dense prediction tasks and an image classification task, given only small computational budgets. For example, HR-NAS surpasses SqueezeNAS [63], which is specially designed for semantic segmentation, while improving efficiency by 45.9%.
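To make the architectural description above concrete, below is a minimal PyTorch sketch of the two ideas the abstract combines: parallel convolutional branches at different resolutions (HRNet-style) and a lightweight transformer whose cost is controlled by the number of tokens and the embedding width. This is an illustrative sketch only, not the authors' implementation or search space; the class names `LightweightTransformer` and `TwoBranchBlock` and the knobs `num_tokens`/`embed_dim` are assumptions made for this example.

```python
# Illustrative sketch (not the HR-NAS code): a two-branch block that keeps a
# high- and a low-resolution feature stream in parallel, each augmented with a
# small transformer whose FLOPs shrink with `num_tokens` and `embed_dim`.
import torch
import torch.nn as nn


class LightweightTransformer(nn.Module):
    """Pools the feature map to a few tokens, runs one attention layer,
    and adds the resulting global context back onto the feature map."""

    def __init__(self, channels, num_tokens=16, embed_dim=32, num_heads=2):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(int(num_tokens ** 0.5))  # e.g. 4x4 = 16 tokens
        self.proj_in = nn.Linear(channels, embed_dim)
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.proj_out = nn.Linear(embed_dim, channels)

    def forward(self, x):
        b, c, h, w = x.shape
        tokens = self.pool(x).flatten(2).transpose(1, 2)   # (B, T, C)
        tokens = self.proj_in(tokens)
        tokens, _ = self.attn(tokens, tokens, tokens)
        ctx = self.proj_out(tokens).mean(dim=1)             # (B, C) global context
        return x + ctx.view(b, c, 1, 1)                     # broadcast back onto the map


class TwoBranchBlock(nn.Module):
    """Parallel conv branches at two resolutions (HRNet-style), each followed
    by the lightweight transformer, then cross-resolution fusion."""

    def __init__(self, c_high=32, c_low=64):
        super().__init__()
        self.high = nn.Sequential(nn.Conv2d(c_high, c_high, 3, padding=1),
                                  nn.BatchNorm2d(c_high), nn.ReLU(inplace=True),
                                  LightweightTransformer(c_high))
        self.low = nn.Sequential(nn.Conv2d(c_low, c_low, 3, padding=1),
                                 nn.BatchNorm2d(c_low), nn.ReLU(inplace=True),
                                 LightweightTransformer(c_low))
        self.high_to_low = nn.Conv2d(c_high, c_low, 3, stride=2, padding=1)
        self.low_to_high = nn.Conv2d(c_low, c_high, 1)

    def forward(self, x_high, x_low):
        h, l = self.high(x_high), self.low(x_low)
        h_out = h + nn.functional.interpolate(self.low_to_high(l), size=h.shape[-2:],
                                              mode="bilinear", align_corners=False)
        l_out = l + self.high_to_low(h)
        return h_out, l_out


if __name__ == "__main__":
    block = TwoBranchBlock()
    x_high = torch.randn(1, 32, 64, 64)   # high-resolution stream
    x_low = torch.randn(1, 64, 32, 32)    # 2x-downsampled stream
    h, l = block(x_high, x_low)
    print(h.shape, l.shape)               # (1, 32, 64, 64) and (1, 64, 32, 32)
```

The point of the sketch is only the cost model: because attention runs over a fixed, small token set rather than the full feature map, shrinking `num_tokens` or `embed_dim` reduces the transformer's FLOPs without touching the convolutional branches, which is the kind of budget-dependent flexibility the abstract attributes to its lightweight transformer.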
Pages: 2981-2991
Number of pages: 11
References
93 in total
  • [1] [Anonymous], 2019, CVPR, DOI 10.1109/CVPR.2019.00720
  • [2] [Anonymous], 2019, CVPR, DOI 10.1109/CVPR.2019.01298
  • [3] [Anonymous], 2019, CVPR, DOI 10.1109/CVPR.2019.01289
  • [4] [Anonymous], 2020, ICML
  • [5] [Anonymous], 2019, CVPR, DOI 10.1109/CVPR.2019.00934
  • [6] [Anonymous], 2019, CVPR, DOI 10.1109/CVPR.2019.00975
  • [7] [Anonymous], 2018, PMLR
  • [8] [Anonymous], 2019, ICML
  • [9] [Anonymous], 2018, NeurIPS
  • [10] Ba J., 2016, arXiv:1607.06450