Dilated Residual Networks

Cited by: 853
Authors
Yu, Fisher [1 ]
Koltun, Vladlen [2 ]
Funkhouser, Thomas [1 ]
Affiliations
[1] Princeton Univ, Princeton, NJ 08544 USA
[2] Intel Labs, San Francisco, CA USA
Source
30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017) | 2017
Funding
U.S. National Science Foundation
Keywords
DOI
10.1109/CVPR.2017.75
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Convolutional networks for image classification progressively reduce resolution until the image is represented by tiny feature maps in which the spatial structure of the scene is no longer discernible. Such loss of spatial acuity can limit image classification accuracy and complicate the transfer of the model to downstream applications that require detailed scene understanding. These problems can be alleviated by dilation, which increases the resolution of output feature maps without reducing the receptive field of individual neurons. We show that dilated residual networks (DRNs) outperform their non-dilated counterparts in image classification without increasing the model's depth or complexity. We then study gridding artifacts introduced by dilation, develop an approach to removing these artifacts ('degridding'), and show that this further increases the performance of DRNs. In addition, we show that the accuracy advantage of DRNs is further magnified in downstream applications such as object localization and semantic segmentation.
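The abstract's central point, that dilation preserves feature-map resolution without shrinking each unit's receptive field, can be illustrated with a minimal PyTorch sketch. This is not the authors' released code; the input size, channel counts, and layer choices below are assumptions for illustration only.

```python
# Minimal sketch (assumed shapes, not the paper's DRN code): replacing a
# strided convolution with a stride-1 dilated convolution keeps the output
# feature map at full resolution while the kernel still covers a wider
# input window, compensating for the removed subsampling.
import torch
import torch.nn as nn

x = torch.randn(1, 64, 56, 56)  # one 64-channel 56x56 feature map (illustrative)

# Conventional downsampling block: stride 2 halves spatial resolution.
strided = nn.Conv2d(64, 64, kernel_size=3, stride=2, padding=1)

# Dilated replacement: stride 1 preserves resolution; dilation 2 spreads the
# 3x3 kernel over a 5x5 input window.
dilated = nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=2, dilation=2)

print(strided(x).shape)  # torch.Size([1, 64, 28, 28]) -- resolution halved
print(dilated(x).shape)  # torch.Size([1, 64, 56, 56]) -- resolution preserved
```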
Pages: 636-644
Page count: 9