Asynchronous Tracking-by-Detection on Adaptive Time Surfaces for Event-based Object Tracking

Cited by: 22
Authors
Chen, Haosheng [1 ]
Wu, Qiangqiang [1 ]
Liang, Yanjie [1 ]
Gao, Xinbo [2 ]
Wang, Hanzi [1 ]
Affiliations
[1] Xiamen Univ, Sch Informat, Fujian Key Lab Sensing & Comp Smart City, Xiamen, Peoples R China
[2] Xidian Univ, Xian, Peoples R China
Source
PROCEEDINGS OF THE 27TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA (MM'19) | 2019
Funding
National Natural Science Foundation of China
Keywords
Event-based Object Tracking; Event-based Object Detection; Event Camera; Adaptive Time Surface; MOTION; VISION;
DOI
10.1145/3343031.3350975
CLC number
TP39 [Computer Applications]
Discipline codes
081203; 0835
Abstract
Event cameras, which are asynchronous bio-inspired vision sensors, have shown great potential in a variety of situations, such as fast-motion and low-illumination scenes. However, most event-based object tracking methods are designed for scenarios with untextured objects and uncluttered backgrounds, and few support bounding-box-based object tracking. The main idea behind this work is to propose an asynchronous Event-based Tracking-by-Detection (ETD) method for generic bounding-box-based object tracking. To achieve this goal, we present an Adaptive Time-Surface with Linear Time Decay (ATSLTD) event-to-frame conversion algorithm, which asynchronously and effectively warps the spatio-temporal information of asynchronous retinal events into a sequence of ATSLTD frames with clear object contours. We feed the sequence of ATSLTD frames to the proposed ETD method to perform accurate and efficient object tracking, leveraging the high temporal resolution of event cameras. We compare the proposed ETD method with seven popular object tracking methods based on conventional cameras or event cameras, and with two variants of ETD. The experimental results show the superiority of the proposed ETD method in handling various challenging environments.
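To make the event-to-frame idea concrete, the following is a minimal, illustrative sketch of a time surface with linear time decay, the building block behind the ATSLTD representation described above. It is not the authors' implementation: the function name, the event tuple layout, and the fixed decay window `tau` are assumptions for illustration only (the paper's "adaptive" component adjusts the accumulation window, which is omitted here). Each pixel keeps the timestamp of its most recent event, and its surface value decays linearly from 1 to 0 as that event ages.

```python
import numpy as np

def linear_decay_time_surface(events, t_ref, shape=(180, 240), tau=0.05):
    """Render a time surface with linear time decay at reference time t_ref.

    events: iterable of (x, y, t, polarity) tuples with timestamps in seconds.
    tau:    decay window in seconds (hypothetical fixed value; the paper
            adapts the window to keep object contours sharp).
    Returns a float array of `shape` with values in [0, 1].
    """
    # Most recent event timestamp per pixel; -inf marks "no event yet".
    last_t = np.full(shape, -np.inf)
    for x, y, t, _polarity in events:
        if t <= t_ref:
            last_t[y, x] = max(last_t[y, x], t)
    # Linear decay: value 1 at t_ref, falling to 0 once an event is older
    # than tau. Pixels with no event clip to 0.
    return np.clip(1.0 - (t_ref - last_t) / tau, 0.0, 1.0)

# Usage: three events on a 4x4 sensor, rendered at t_ref = 0.05 s.
events = [(1, 2, 0.00, 1), (1, 2, 0.04, 1), (3, 0, 0.049, -1)]
surface = linear_decay_time_surface(events, t_ref=0.05, shape=(4, 4))
```

In this sketch a recent event near `t_ref` yields a value close to 1, a stale one close to 0, so object contours (where fresh events cluster) stand out against the decayed background.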
Pages: 473-481 (9 pages)