MotionRFCN: Motion Segmentation Using Consecutive Dense Depth Maps

Cited by: 1
|
Authors
Liu, Yiling [1 ]
Wang, Hesheng [1 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Dept Automat, Shanghai, Peoples R China
Source
PRICAI 2019: TRENDS IN ARTIFICIAL INTELLIGENCE, PT II | 2019 / Vol. 11671
Keywords
Deep learning; Motion segmentation; Autonomous driving; VISION;
DOI
10.1007/978-3-030-29911-8_39
CLC (Chinese Library Classification) number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
It is important to enable autonomous robots to detect or segment moving objects in dynamic scenes, since they must perform collision-free navigation. Motion segmentation from a moving platform is challenging because of the compound motion of the background and the moving objects. Existing approaches either rely on long multistage pipelines, which are too inefficient for real-time application, or on optical flow, which is sensitive to environmental conditions. In this paper, this challenging task is tackled by constructing spatiotemporal features from two consecutive dense depth maps. The depth maps can be generated either from LiDAR scan data or by stereo vision algorithms. The core of the proposed approach is a fully convolutional network with inserted Gated Recurrent Units, denoted MotionRFCN. We also create a publicly available dataset (KITTI-MoSeg) which contains more than 2000 frames with motion annotations. Qualitative and quantitative evaluations of MotionRFCN demonstrate its state-of-the-art performance on the KITTI dataset. The basic MotionRFCN runs in real time and segments moving objects whether the platform is stationary or moving. To the best of our knowledge, the proposed method is the first to perform motion segmentation using only dense depth maps as input.
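The abstract does not give the network details, but its two core ideas, stacking two consecutive dense depth maps into a spatiotemporal input and passing per-pixel features through a gated recurrent unit, can be sketched as follows. This is a minimal illustrative NumPy sketch; all names, shapes, and the per-pixel (rather than convolutional) GRU are assumptions for illustration, not the paper's actual architecture.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, p):
    """One standard GRU update applied independently at every pixel.

    x: (H, W, C_in) features built from the depth maps
    h: (H, W, C_h)  hidden state carried over from the previous frame
    p: dict of weight matrices (hypothetical shapes, for illustration only)
    """
    z = sigmoid(x @ p["Wz"] + h @ p["Uz"])            # update gate
    r = sigmoid(x @ p["Wr"] + h @ p["Ur"])            # reset gate
    h_tilde = np.tanh(x @ p["Wh"] + (r * h) @ p["Uh"])  # candidate state
    return (1.0 - z) * h + z * h_tilde                # gated blend

# Stack two consecutive dense depth maps into a 2-channel spatiotemporal input.
rng = np.random.default_rng(0)
H, W, C_h = 8, 8, 4
depth_prev = rng.random((H, W))
depth_curr = rng.random((H, W))
x = np.stack([depth_prev, depth_curr], axis=-1)       # (H, W, 2)

params = {
    "Wz": rng.standard_normal((2, C_h)), "Uz": rng.standard_normal((C_h, C_h)),
    "Wr": rng.standard_normal((2, C_h)), "Ur": rng.standard_normal((C_h, C_h)),
    "Wh": rng.standard_normal((2, C_h)), "Uh": rng.standard_normal((C_h, C_h)),
}
h = np.zeros((H, W, C_h))      # zero hidden state at the first frame
h = gru_step(x, h, params)
print(h.shape)                 # (8, 8, 4)
```

In the actual paper the recurrence is inserted inside a fully convolutional network, so the gate computations would use convolutions over learned feature maps rather than raw per-pixel depth values; the sketch only shows the gating mechanism itself.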
Pages: 510-522
Page count: 13
Related papers
50 records total
  • [1] Detecting Chest Compression Depth Using a Smartphone Camera and Motion Segmentation
    Meinich-Bache, Oyvind
    Engan, Kjersti
    Eftestol, Trygve
    Austvoll, Ivar
    IMAGE ANALYSIS, SCIA 2017, PT II, 2017, 10270 : 53 - 64
  • [2] A simple framework for spatio-temporal video segmentation and delayering using dense motion fields
    Piroddi, R
    Vlachos, T
    IEEE SIGNAL PROCESSING LETTERS, 2006, 13 (07) : 421 - 424
  • [3] Integrating feature maps and competitive layer architectures for motion segmentation
    Steffen, Jan
    Pardowitz, Michael
    Steil, Jochen J.
    Ritter, Helge
    NEUROCOMPUTING, 2011, 74 (09) : 1372 - 1381
  • [4] SGTBN: Generating Dense Depth Maps From Single-Line LiDAR
    Lu, Hengjie
    Xu, Shugong
    Cao, Shan
    IEEE SENSORS JOURNAL, 2021, 21 (17) : 19091 - 19100
  • [5] Integrated segmentation and depth ordering of motion layers in image sequences
    Tweed, DS
    Calway, AD
    IMAGE AND VISION COMPUTING, 2002, 20 (9-10) : 709 - 723
  • [6] Unsupervised Ego-Motion and Dense Depth Estimation with Monocular Video
    Xu, Yufan
    Wang, Yan
    Guo, Lei
    2018 IEEE 18TH INTERNATIONAL CONFERENCE ON COMMUNICATION TECHNOLOGY (ICCT), 2018, : 1306 - 1310
  • [7] Enhanced Depth Motion Maps for Improved Human Action Recognition from Depth Action Sequences
    Rao, Dustakar Surendra
    Rao, L. Koteswara
    Bhagyaraju, Vipparthi
    Meng, Goh Kam
    TRAITEMENT DU SIGNAL, 2024, 41 (03) : 1461 - 1472
  • [8] PARALLAX MOTION EFFECT GENERATION THROUGH INSTANCE SEGMENTATION AND DEPTH ESTIMATION
    Pinto, Allan
    Cordova, Manuel A.
    Decker, Luis G. L.
    Flores-Campana, Jose L.
    Souza, Marcos R.
    dos Santos, Andreza A.
    Conceicao, Jhonatas S.
    Gagliardi, Henrique F.
    Luvizon, Diogo C.
    Torres, Ricardo da S.
    Pedrini, Helio
    2020 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2020, : 1621 - 1625
  • [9] A Feature-based Approach for Dense Segmentation and Estimation of Large Disparity Motion
    Josh Wills
    Sameer Agarwal
    Serge Belongie
    International Journal of Computer Vision, 2006, 68 : 125 - 143
  • [10] A feature-based approach for dense segmentation and estimation of large disparity motion
    Wills, Josh
    Agarwal, Sameer
    Belongie, Serge
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2006, 68 (02) : 125 - 143