Detail-Preserving Pooling in Deep Networks

Cited by: 86
Authors
Saeedan, Faraz [1]
Weber, Nicolas [1,2]
Goesele, Michael [1,3]
Roth, Stefan [1]
Affiliations
[1] Tech Univ Darmstadt, Darmstadt, Germany
[2] NEC Labs Europe, Heidelberg, Germany
[3] Oculus Res, Garner, NC, USA
Source
2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) | 2018
DOI
10.1109/CVPR.2018.00949
CLC number
TP18 [Theory of Artificial Intelligence];
Subject classification codes
081104; 0812; 0835; 1405;
Abstract
Most convolutional neural networks use some method for gradually downscaling the size of the hidden layers. This is commonly referred to as pooling, and is applied to reduce the number of parameters, improve invariance to certain distortions, and increase the receptive field size. Since pooling by nature is a lossy process, it is crucial that each such layer maintains the portion of the activations that is most important for the network's discriminability. Yet, simple maximization or averaging over blocks, max or average pooling, or plain downsampling in the form of strided convolutions are the standard. In this paper, we aim to leverage recent results on image downscaling for the purposes of deep learning. Inspired by the human visual system, which focuses on local spatial changes, we propose detail-preserving pooling (DPP), an adaptive pooling method that magnifies spatial changes and preserves important structural detail. Importantly, its parameters can be learned jointly with the rest of the network. We analyze some of its theoretical properties and show its empirical benefits on several datasets and networks, where DPP consistently outperforms previous pooling approaches.
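The abstract describes pooling that weights pixels by how strongly they deviate locally, so that high-contrast detail dominates the downscaled output instead of being averaged away. As a rough illustration of that idea (not the paper's exact formulation), the sketch below implements inverse-bilateral-style weighted 2x2 pooling in NumPy, where a pixel's weight grows with its deviation from the block mean; the `alpha` and `lam` parameters here are hypothetical stand-ins for the learnable parameters that DPP trains jointly with the network.

```python
import numpy as np

def dpp_pool(x, block=2, alpha=1e-3, lam=2.0):
    """Detail-preserving pooling sketch (inverse-bilateral weighting).

    Each non-overlapping `block x block` window is reduced to a weighted
    average, with weights w = alpha + |x - mean|**lam. Pixels that differ
    most from the block mean get the largest weights, so local spatial
    changes are magnified rather than smoothed out (the opposite of a
    classical bilateral filter). `alpha` and `lam` are illustrative
    stand-ins for DPP's learnable parameters.
    """
    h, w = x.shape
    # Split the image into blocks: (h//b, w//b, b*b)
    xb = x.reshape(h // block, block, w // block, block)
    xb = xb.transpose(0, 2, 1, 3).reshape(h // block, w // block, -1)
    mean = xb.mean(axis=-1, keepdims=True)
    wgt = alpha + np.abs(xb - mean) ** lam   # reward deviation from the mean
    return (wgt * xb).sum(axis=-1) / wgt.sum(axis=-1)
```

With `lam = 0` every pixel receives the same weight and this reduces to plain average pooling; increasing `lam` pulls each output pixel toward the most detail-carrying activations in its block.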
Pages: 9108-9116
Page count: 9