Learning to Zoom: A Saliency-Based Sampling Layer for Neural Networks

Cited by: 100
Authors
Recasens, Adria [1 ]
Kellnhofer, Petr [1 ]
Stent, Simon [2 ]
Matusik, Wojciech [1 ]
Torralba, Antonio [1 ]
Affiliations
[1] MIT, 77 Massachusetts Ave, Cambridge, MA 02139 USA
[2] Toyota Res Inst, Cambridge, MA 02139 USA
Source
COMPUTER VISION - ECCV 2018, PT IX | 2018 / Vol. 11213
Keywords
Task saliency; Image sampling; Attention; Spatial transformer; Convolutional neural networks; Deep learning;
DOI
10.1007/978-3-030-01240-3_4
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
We introduce a saliency-based distortion layer for convolutional neural networks that helps to improve the spatial sampling of input data for a given task. Our differentiable layer can be added as a preprocessing block to existing task networks and trained together with them in an end-to-end fashion. The effect of the layer is to efficiently estimate how to sample from the original data in order to boost task performance. For example, for an image classification task in which the original data might range in size up to several megapixels, but where the desired input images to the task network are much smaller, our layer learns how best to sample from the underlying high-resolution data in a manner which preserves task-relevant information better than uniform downsampling. This has the effect of creating distorted, caricature-like intermediate images, in which idiosyncratic elements of the image that improve task performance are zoomed and exaggerated. Unlike alternative approaches such as spatial transformer networks, our proposed layer is inspired by image saliency, computed efficiently from uniformly downsampled data, and degrades gracefully to a uniform sampling strategy under uncertainty. We apply our layer to improve existing networks for the tasks of human gaze estimation and fine-grained object classification. Code for our method is available at: http://github.com/recasens/Saliency-Sampler.
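The core idea in the abstract, turning a saliency map into a non-uniform sampling grid that "zooms" into salient regions and falls back to uniform sampling when saliency is flat, can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the function name `saliency_to_grid`, the Gaussian attraction kernel, and the `sigma` parameter are assumptions made here for demonstration.

```python
import numpy as np

def saliency_to_grid(saliency, sigma=0.3):
    """Map a saliency map to non-uniform sampling coordinates (u, v).

    Each output pixel samples the input at a saliency-weighted local
    average of input coordinates, so high-saliency regions attract
    more samples (appearing "zoomed" in the resampled image). With a
    flat saliency map this reduces to a symmetric, near-uniform grid.
    Hypothetical sketch; the kernel choice and sigma are illustrative.
    """
    h, w = saliency.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    coords = np.stack([ys.ravel(), xs.ravel()], axis=1)      # (h*w, 2)
    # Gaussian distance kernel between every output/input location pair.
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
    k = np.exp(-d2 / (2.0 * (sigma * max(h, w)) ** 2))
    # Weight each input location by its saliency, then normalize.
    weights = k * saliency.ravel()[None, :]                  # (out, in)
    weights /= weights.sum(axis=1, keepdims=True)
    # Sampling coordinates: saliency-weighted mean of input coordinates.
    u = (weights * xs.ravel()[None, :]).sum(1).reshape(h, w)
    v = (weights * ys.ravel()[None, :]).sum(1).reshape(h, w)
    return u, v
```

In a full pipeline the resulting `(u, v)` grid would be used by a differentiable resampler (e.g. bilinear grid sampling) so the whole block can be trained end-to-end with the task network, as the abstract describes.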
Pages: 52-67
Page count: 16