A Spatial-Motion-Segmentation Algorithm by Fusing EDPA and Motion Compensation

Cited by: 2
Authors
Liu, Xinghua [1 ]
Zhao, Yunan [1 ]
Yang, Lei [1 ]
Ge, Shuzhi Sam [2 ]
Affiliations
[1] Xian Univ Technol, Sch Elect Engn, Xian 710048, Peoples R China
[2] Natl Univ Singapore, Dept Elect & Comp Engn, Singapore 119077, Singapore
Funding
National Natural Science Foundation of China;
Keywords
event camera; motion segmentation; motion compensation; depth estimation; motion flow; RECOGNITION; NETWORK;
DOI
10.3390/s22186732
Chinese Library Classification
O65 [Analytical Chemistry];
Subject Classification Codes
070302; 081704;
Abstract
Motion segmentation is a fundamental step in detection, tracking, and recognition: it separates moving objects from the background. In this paper, we propose a spatial-motion-segmentation algorithm that fuses the events-dimensionality-preprocessing algorithm (EDPA) and the volume of warped events (VWE). The EDPA consists of depth estimation, linear interpolation, and coordinate normalization, which together provide an extra dimension (Z) for the events. The VWE is constructed by accumulating the warped events (i.e., motion compensation), and an iterative-clustering algorithm is introduced to maximize the contrast (i.e., variance) of the VWE. We built our datasets with the event-camera simulator (ESIM), which simulates high-frame-rate videos that are decomposed into frames to generate a large amount of reliable event data. Exterior and interior scenes were segmented in the first part of the experiments. We also present a sparrow-search-algorithm-based gradient ascent (SSA-Gradient Ascent). SSA-Gradient Ascent, plain gradient ascent, and particle swarm optimization (PSO) were evaluated in the second part. On Motion Flow 1, SSA-Gradient Ascent achieved a variance 0.402% higher than the baseline value and converged 52.941% faster than the baseline rate. On Motion Flow 2, SSA-Gradient Ascent again outperformed the alternatives. The experimental results validate the feasibility of the proposed algorithm.
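The core idea the abstract describes (accumulating motion-compensated events and maximizing the variance of the result) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function names, the 2D flow parameterization `theta = (vx, vy)`, the bilinear voting, and the finite-difference gradient are all assumptions made here for clarity; the paper's method additionally uses the EDPA depth dimension and an SSA-initialized optimizer.

```python
import numpy as np

def warped_event_image(xs, ys, ts, theta, shape=(180, 240)):
    """Accumulate events warped to the first timestamp under a candidate
    flow theta = (vx, vy) (a 2D stand-in for the paper's warp), using
    bilinear voting so the image varies smoothly with theta."""
    vx, vy = theta
    wx = xs - vx * (ts - ts[0])  # motion-compensated coordinates
    wy = ys - vy * (ts - ts[0])
    x0 = np.floor(wx).astype(int)
    y0 = np.floor(wy).astype(int)
    fx, fy = wx - x0, wy - y0
    img = np.zeros(shape)
    # Split each event's vote over its four neighboring pixels.
    for dx, dy, w in ((0, 0, (1 - fx) * (1 - fy)), (1, 0, fx * (1 - fy)),
                      (0, 1, (1 - fx) * fy), (1, 1, fx * fy)):
        xi, yi = x0 + dx, y0 + dy
        ok = (xi >= 0) & (xi < shape[1]) & (yi >= 0) & (yi < shape[0])
        np.add.at(img, (yi[ok], xi[ok]), w[ok])
    return img

def contrast(img):
    """Contrast objective: variance of the warped-event image. A flow that
    matches the true motion focuses events onto edges and maximizes it."""
    return np.var(img)

def gradient_ascent(xs, ys, ts, theta0, lr=10.0, eps=0.5, iters=30):
    """Plain gradient ascent on the contrast, with a central
    finite-difference gradient (illustrative, not the paper's SSA variant)."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(iters):
        grad = np.zeros_like(theta)
        for i in range(theta.size):
            d = np.zeros_like(theta)
            d[i] = eps
            grad[i] = (contrast(warped_event_image(xs, ys, ts, theta + d)) -
                       contrast(warped_event_image(xs, ys, ts, theta - d))) / (2 * eps)
        theta += lr * grad
    return theta
```

For events generated by an edge moving at 30 px/s, warping with the true velocity collapses them onto a single sharp line, so the variance of the warped-event image peaks there; this is the property the iterative optimizer climbs.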
Pages: 15