Trajectory tracking method based on Bayesian classifier for pulse array image sensor

Cited by: 0
Authors
Zhang, Peiwen [1,2]
Xu, Jiangtao [1,2]
Gao, Zhiyuan [1,2]
Nie, Kaiming [1,2]
Gao, Jing [1,2]
Affiliations
[1] Tianjin Univ, Sch Microelect, Tianjin, Peoples R China
[2] Tianjin Key Lab Imaging & Sensing Microelect Tech, Tianjin, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
tracking; pulse stream; sparse data; Bayesian classifier; vision sensor
DOI
10.1117/1.OE.62.3.033106
Chinese Library Classification (CLC)
O43 [Optics]
Discipline codes
070207; 0803
Abstract
A pulse array image sensor (PAIS) is a bionic image sensor that converts light intensity into a pulse sequence to reduce the data volume. Traditional tracking methods are not suited to this sparse data form because grayscale information is absent. A trajectory tracking method based on a Bayesian classifier is proposed to fully exploit the properties of the pulse data. First, candidate points with the smallest interval distances are selected within the area of interest. Then, the total pulse counts over a specified period at different positions are gathered to compose positive and negative feature vectors, which are used to train a naive Bayesian classifier. The classifier identifies the exact target position among the candidate points, and the features at the new tracking position are used to retrain and update the classifier. This two-step filtering algorithm, which combines the interval distance and the Bayesian classifier, requires only the raw pulse data without preprocessing, exploiting the advantages of the special data format and improving computing efficiency. In this way, position information is obtained directly from the pulse data, avoiding the waste of computing resources incurred by reconstructing grayscale images and processing them in the traditional way. Experiments were performed on both real filmed data and public event camera datasets. Our method obtains trajectories with high accuracy over long tracking times, and the tracking errors remain in the single digits. In the comparison experiments, although our method works with a smaller data volume than models that use both frame and event data, the results show performance comparable to state-of-the-art methods. (c) 2023 Society of Photo-Optical Instrumentation Engineers (SPIE)
Pages: 12
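The abstract describes a two-step pipeline: candidate points are first selected by interval distance inside a search region, then a naive Bayesian classifier scores pulse-count feature vectors and is updated online at the new position. The Python sketch below is a minimal illustration of that idea under stated assumptions, not the authors' implementation: the array names (pulse_counts, intervals), the patch size, the number of candidates, and the running-average Gaussian naive-Bayes update are all introduced here for illustration.

# Minimal sketch (assumption, not the paper's code): two-step tracking on raw pulse data.
# `pulse_counts`: H x W array of pulse totals over the evaluation window.
# `intervals`:    H x W array of mean inter-pulse intervals per pixel.
# Boundary handling is omitted: the target is assumed to stay well inside the sensor.
import numpy as np


def select_candidates(intervals, roi, k=16):
    # Step 1: keep the k pixels with the smallest interval distance inside the ROI.
    y0, y1, x0, x1 = roi
    sub = intervals[y0:y1, x0:x1]
    flat = np.argsort(sub, axis=None)[:k]
    ys, xs = np.unravel_index(flat, sub.shape)
    return np.stack([ys + y0, xs + x0], axis=1)   # (k, 2) candidate coordinates


def patch_features(pulse_counts, center, half=4):
    # Feature vector: raw pulse totals in a (2*half+1)^2 patch around `center`.
    y, x = center
    patch = pulse_counts[y - half:y + half + 1, x - half:x + half + 1]
    return patch.ravel().astype(float)


class OnlineGaussianNB:
    # Naive Bayes with independent Gaussians per feature and equal class priors;
    # a running-average update stands in for the paper's classifier update step.
    def __init__(self, dim, lr=0.15):
        self.mu = np.zeros((2, dim))
        self.var = np.ones((2, dim))
        self.lr = lr

    def update(self, X, label):
        # X: (n, dim) batch of feature vectors for class `label` (1 = target, 0 = background).
        self.mu[label] = (1 - self.lr) * self.mu[label] + self.lr * X.mean(axis=0)
        self.var[label] = (1 - self.lr) * self.var[label] + self.lr * (X.var(axis=0) + 1e-6)

    def score(self, x):
        # Log-likelihood ratio of "target" (class 1) versus "background" (class 0).
        ll = -0.5 * (((x - self.mu) ** 2) / self.var + np.log(2.0 * np.pi * self.var)).sum(axis=1)
        return ll[1] - ll[0]


def track_step(model, pulse_counts, intervals, prev_pos, search=20, half=4):
    # Step 2: score candidates with the classifier; the best-scoring one is the new position.
    y, x = prev_pos
    cands = select_candidates(intervals, (y - search, y + search, x - search, x + search))
    scores = [model.score(patch_features(pulse_counts, tuple(c), half)) for c in cands]
    new_pos = tuple(int(v) for v in cands[int(np.argmax(scores))])
    # Online update: the patch at the new position is a positive sample,
    # patches shifted away from it serve as negative samples.
    pos = patch_features(pulse_counts, new_pos, half)[None, :]
    neg = np.stack([patch_features(pulse_counts, (new_pos[0] + dy, new_pos[1] + dx), half)
                    for dy, dx in ((-2 * half, 0), (2 * half, 0), (0, -2 * half), (0, 2 * half))])
    model.update(pos, 1)
    model.update(neg, 0)
    return new_pos

A typical use of this sketch would create the model with dim=(2*4+1)**2, seed it with positive and negative patches around a user-supplied starting position, and then call track_step once per evaluation window; how the paper actually initializes and schedules the classifier update is not specified in the abstract.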