MobileNetV2: Inverted Residuals and Linear Bottlenecks

Cited by: 14411
Authors
Sandler, Mark [1 ]
Howard, Andrew [1 ]
Zhu, Menglong [1 ]
Zhmoginov, Andrey [1 ]
Chen, Liang-Chieh [1 ]
Affiliations
[1] Google Inc, Mountain View, CA 94043 USA
Source
2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) | 2018
Keywords
DOI
10.1109/CVPR.2018.00474
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
In this paper we describe a new mobile architecture, MobileNetV2, that improves the state-of-the-art performance of mobile models on multiple tasks and benchmarks as well as across a spectrum of different model sizes. We also describe efficient ways of applying these mobile models to object detection in a novel framework we call SSDLite. Additionally, we demonstrate how to build mobile semantic segmentation models through a reduced form of DeepLabv3 which we call Mobile DeepLabv3. The MobileNetV2 architecture is based on an inverted residual structure where the shortcut connections are between the thin bottleneck layers. The intermediate expansion layer uses lightweight depthwise convolutions to filter features as a source of non-linearity. Additionally, we find that it is important to remove non-linearities in the narrow layers in order to maintain representational power. We demonstrate that this improves performance and provide an intuition that led to this design. Finally, our approach allows decoupling of the input/output domains from the expressiveness of the transformation, which provides a convenient framework for further analysis. We measure our performance on ImageNet [1] classification, COCO object detection [2], and VOC image segmentation [3]. We evaluate the trade-offs between accuracy and number of operations measured by multiply-adds (MAdd), as well as actual latency and the number of parameters.
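The block structure described in the abstract can be sketched as follows: a 1x1 pointwise expansion, a 3x3 depthwise convolution, and a linear 1x1 projection back to a thin bottleneck, with the shortcut connecting the thin bottleneck layers. This is a minimal PyTorch illustration, not the authors' reference implementation; the expansion factor of 6, the use of ReLU6, and the batch-norm placement are assumptions borrowed from common MobileNetV2 re-implementations.

import torch
from torch import nn

class InvertedResidual(nn.Module):
    """Minimal sketch of an inverted residual block with a linear bottleneck.
    Assumed hyperparameters (not taken from the abstract): expansion factor 6,
    ReLU6 activations, batch norm after every convolution."""

    def __init__(self, in_ch, out_ch, stride=1, expansion=6):
        super().__init__()
        hidden = in_ch * expansion
        # the shortcut connects thin bottleneck layers of equal shape
        self.use_residual = stride == 1 and in_ch == out_ch
        self.block = nn.Sequential(
            # 1x1 pointwise expansion to a wider intermediate representation
            nn.Conv2d(in_ch, hidden, 1, bias=False),
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            # lightweight 3x3 depthwise convolution filters the expanded features
            nn.Conv2d(hidden, hidden, 3, stride=stride, padding=1,
                      groups=hidden, bias=False),
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            # linear 1x1 projection back to the narrow bottleneck: no activation,
            # reflecting the paper's point about preserving representational power
            nn.Conv2d(hidden, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        return x + self.block(x) if self.use_residual else self.block(x)

# usage sketch: a stride-1 block with matching channels keeps the shortcut
x = torch.randn(1, 32, 56, 56)
y = InvertedResidual(32, 32)(x)  # -> shape (1, 32, 56, 56)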
Pages: 4510-4520
Page count: 11
References
46 in total
[1] Abadi M., 2015, TensorFlow: Large-scale machine learning on heterogeneous systems.
[2] [Anonymous], 2017, CoRR
[3] [Anonymous], 2016, CoRR
[4] [Anonymous], 2014, ECCV
[5] [Anonymous], CoRR
[6] [Anonymous], ADV NEURAL INFORM PR
[7] [Anonymous], 2014, IJCV
[8] [Anonymous], CoRR
[9] [Anonymous], 2017, TPAMI
[10] [Anonymous], 2017, CoRR