RGB-D Scene Labeling with Multimodal Recurrent Neural Networks

Cited by: 3
Authors
Fan, Heng [1 ]
Mei, Xue [2 ]
Prokhorov, Danil [2 ]
Ling, Haibin [1 ]
Affiliations
[1] Temple Univ, Dept Comp & Informat Sci, Philadelphia, PA 19122 USA
[2] Toyota Res Inst North Amer, Ann Arbor, MI USA
Source
2017 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW) | 2017
Keywords
OBJECT RECOGNITION; FEATURES; TEXTURE; SCALE
DOI
10.1109/CVPRW.2017.31
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Recurrent neural networks (RNNs) are able to capture context in an image by modeling long-range semantic dependencies among image units. However, existing methods only utilize RNNs to model dependencies within a single modality (e.g., RGB) for labeling. In this work we extend single-modal RNNs to multimodal RNNs (MM-RNNs) and apply them to RGB-D scene labeling. Our MM-RNNs seamlessly model the dependencies of both the RGB and depth modalities, and allow 'memory' sharing across modalities. By sharing 'memory', each modality possesses properties of both itself and the other modality, and thus becomes more discriminative for distinguishing pixels. Moreover, we also analyse two simple extensions of single-modal RNNs and demonstrate that our MM-RNNs perform better than both of them. Integrating with convolutional neural networks (CNNs), we build an end-to-end network for RGB-D scene labeling. Extensive experiments on NYU Depth V1 and V2 demonstrate the effectiveness of MM-RNNs.
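To make the 'memory' sharing described in the abstract concrete, below is a minimal, hypothetical PyTorch sketch of a single multimodal recurrent update: each modality's hidden state is refreshed from its own input and hidden state plus the other modality's hidden state. The cell name, layer sizes, and additive fusion rule are assumptions for illustration only; the paper's MM-RNN additionally specifies how the recurrence traverses image units, which is not reproduced here.

```python
# Conceptual sketch (PyTorch) of a multimodal recurrent update in which the
# RGB and depth streams share "memory": each modality's new hidden state is
# computed from its own input, its own previous hidden state, and the other
# modality's previous hidden state. The layer sizes, additive fusion, and tanh
# nonlinearity are illustrative assumptions, not the paper's exact equations.
import torch
import torch.nn as nn


class MMRNNCell(nn.Module):
    def __init__(self, feat_dim: int, hidden_dim: int):
        super().__init__()
        # Per-modality input transforms (e.g., for per-unit CNN features).
        self.in_rgb = nn.Linear(feat_dim, hidden_dim)
        self.in_depth = nn.Linear(feat_dim, hidden_dim)
        # Recurrent transforms: "self" reads the modality's own memory,
        # "cross" reads the other modality's memory (the shared part).
        self.rec_rgb_self = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.rec_rgb_cross = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.rec_depth_self = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.rec_depth_cross = nn.Linear(hidden_dim, hidden_dim, bias=False)

    def forward(self, x_rgb, x_depth, h_rgb, h_depth):
        # Both updates mix the two hidden states, so context gathered by one
        # modality is available to the other at the next step.
        h_rgb_new = torch.tanh(
            self.in_rgb(x_rgb)
            + self.rec_rgb_self(h_rgb)
            + self.rec_rgb_cross(h_depth)
        )
        h_depth_new = torch.tanh(
            self.in_depth(x_depth)
            + self.rec_depth_self(h_depth)
            + self.rec_depth_cross(h_rgb)
        )
        return h_rgb_new, h_depth_new
```

In such a setup, x_rgb and x_depth would be per-image-unit CNN features and the updated hidden states would feed a pixel-wise classifier, mirroring the end-to-end CNN + MM-RNN pipeline described in the abstract.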
Pages: 203 - 211
Page count: 9