Depth-Adaptive Deep Neural Network for Semantic Segmentation

Cited by: 59
Authors
Kang, Byeongkeun [1 ]
Lee, Yeejin [1 ]
Nguyen, Truong Q. [1 ]
Affiliations
[1] Univ Calif San Diego, Dept Elect & Comp Engn, La Jolla, CA 92093 USA
Funding
US National Science Foundation;
Keywords
Semantic segmentation; convolutional neural networks; deep learning;
DOI
10.1109/TMM.2018.2798282
CLC Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812 ;
Abstract
In this paper, we present a depth-adaptive deep neural network that uses a depth map for semantic segmentation. Typical deep neural networks receive inputs at predetermined locations regardless of the distance from the camera. This fixed receptive field makes it difficult to generalize the features of objects at various distances: the predetermined receptive field is too small for nearby objects, and too large for distant ones. To overcome this challenge, we develop a neural network that adapts the receptive field not only for each layer but also for each neuron at each spatial location. To adjust the receptive field, we propose the depth-adaptive multiscale (DaM) convolution layer, which consists of an adaptive perception neuron and an in-layer multiscale neuron. The adaptive perception neuron adjusts the receptive field at each spatial location using the corresponding depth information. The in-layer multiscale neuron applies receptive fields of different sizes in each feature space to learn features at multiple scales. The proposed DaM convolution is applied to two fully convolutional neural networks. We demonstrate the effectiveness of the proposed networks on a publicly available RGB-D dataset for semantic segmentation and on a novel hand segmentation dataset for hand-object interaction. The experimental results show that the proposed method outperforms state-of-the-art methods without any additional layers or preprocessing/postprocessing.
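The core mechanism described in the abstract, adapting a convolution's receptive field per pixel from the depth map, can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact formulation: the inverse-depth scaling rule, the `ref_depth` normalization constant, and all function names are assumptions introduced here.

```python
import numpy as np

def depth_adaptive_offsets(depth, ref_depth=2.0):
    """Scale a 3x3 sampling grid inversely with depth: nearby pixels (small
    depth) get a wider receptive field. Illustrative rule; `ref_depth` is an
    assumed normalization constant, not a value from the paper."""
    scale = ref_depth / max(depth, 1e-6)
    return [(dy * scale, dx * scale) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]

def bilinear(img, y, x):
    """Bilinearly sample img at fractional (y, x), clamping to the border."""
    h, w = img.shape
    y = min(max(y, 0.0), h - 1.0)
    x = min(max(x, 0.0), w - 1.0)
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
    wy, wx = y - y0, x - x0
    return ((1 - wy) * (1 - wx) * img[y0, x0] + (1 - wy) * wx * img[y0, x1]
            + wy * (1 - wx) * img[y1, x0] + wy * wx * img[y1, x1])

def dam_conv_at(img, depth_map, weights, y, x):
    """One 3x3 depth-adapted convolution at pixel (y, x): sampling locations
    are shifted by depth-scaled offsets and read via bilinear interpolation."""
    offsets = depth_adaptive_offsets(depth_map[y, x])
    return sum(w * bilinear(img, y + dy, x + dx)
               for w, (dy, dx) in zip(weights, offsets))
```

Because the offsets are fractional, bilinear interpolation keeps the operation differentiable with respect to the input, which is what allows such adaptive sampling to be trained end-to-end in a convolutional network.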
Pages: 2478-2490
Page count: 13
References
45 total (first 10 shown)
[1]   The interpretation of phase and intensity data from AMCW light detection sensors for reliable ranging [J].
Adams, MD ;
Probert, PJ .
INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH, 1996, 15 (05) :441-458
[2]  
[Anonymous], 2016, Proceedings of the 24th ACM international conference on Multimedia
[3]  
[Anonymous], P BRIT MACH VIS C YO
[4]  
Apostol, T. M., 1974, Addison-Wesley Series in Mathematics, 2nd ed.
[5]  
Bishop, Christopher M., 2016, Pattern Recognition and Machine Learning
[6]   DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs [J].
Chen, Liang-Chieh ;
Papandreou, George ;
Kokkinos, Iasonas ;
Murphy, Kevin ;
Yuille, Alan L. .
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2018, 40 (04) :834-848
[7]   The Cityscapes Dataset for Semantic Urban Scene Understanding [J].
Cordts, Marius ;
Omran, Mohamed ;
Ramos, Sebastian ;
Rehfeld, Timo ;
Enzweiler, Markus ;
Benenson, Rodrigo ;
Franke, Uwe ;
Roth, Stefan ;
Schiele, Bernt .
2016 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2016, :3213-3223
[8]   Deformable Convolutional Networks [J].
Dai, Jifeng ;
Qi, Haozhi ;
Xiong, Yuwen ;
Li, Yi ;
Zhang, Guodong ;
Hu, Han ;
Wei, Yichen .
2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2017, :764-773
[9]   Robust 3D Hand Pose Estimation in Single Depth Images: from Single-View CNN to Multi-View CNNs [J].
Ge, Liuhao ;
Liang, Hui ;
Yuan, Junsong ;
Thalmann, Daniel .
2016 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2016, :3593-3601
[10]   Vision meets robotics: The KITTI dataset [J].
Geiger, A. ;
Lenz, P. ;
Stiller, C. ;
Urtasun, R. .
INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH, 2013, 32 (11) :1231-1237