Adaptive Optimization and Dynamic Representation Method for Asynchronous Data Based on Regional Correlation Degree

Times Cited: 0
Authors
Tang, Sichao [1 ,2 ]
Zhao, Yuchen [1 ]
Lv, Hengyi [1 ]
Sun, Ming [1 ]
Feng, Yang [1 ]
Zhang, Zeshu [1 ]
Affiliations
[1] Chinese Acad Sci, Changchun Inst Opt Fine Mech & Phys, Changchun 130033, Peoples R China
[2] Univ Chinese Acad Sci, Beijing 100049, Peoples R China
Keywords
event cameras; slicing methods; event representations; vision; motion; object; space; framework; sensor
DOI
10.3390/s24237430
Chinese Library Classification (CLC)
O65 [Analytical Chemistry]
Discipline Codes
070302; 081704
Abstract
Event cameras, as bio-inspired visual sensors, offer significant advantages for visual tasks through their high dynamic range and high temporal resolution. These capabilities enable efficient and reliable motion estimation even in highly complex scenes. However, these advantages come with trade-offs. For instance, current event-based vision sensors have low spatial resolution, and the process of event representation can introduce varying degrees of data redundancy and incompleteness. Additionally, because of the inherent characteristics of event stream data, the stream cannot be used directly; pre-processing steps such as slicing and frame compression are required. Various pre-processing algorithms already exist for slicing and compressing event streams, but these methods fall short when the stream contains multiple subjects moving at different, varying speeds, and they can exacerbate the inherent deficiencies of the event information flow. To address this longstanding issue, we propose a novel and efficient Asynchronous Spike Dynamic Metric and Slicing algorithm (ASDMS). ASDMS adaptively segments the event stream into fragments of varying lengths based on the spatiotemporal structure and polarity attributes of the events. Moreover, we introduce a new Adaptive Spatiotemporal Subject Surface Compensation algorithm (ASSSC). ASSSC compensates for missing motion information in the event stream and removes redundant information, thereby achieving better performance and effectiveness in event stream segmentation than existing event representation algorithms. Additionally, after the processed results are compressed into frame images, imaging quality is significantly improved.
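The ASDMS algorithm itself is not reproduced in this record. Purely as an illustrative sketch of what adaptive, variable-length slicing of an event stream can look like, the toy function below starts a new slice whenever the local event rate changes sharply; the function name, the `window` and `ratio` parameters, and the rate heuristic are all assumptions for illustration, not the authors' published criterion (which also uses spatiotemporal structure and polarity):

```python
import numpy as np

def adaptive_slice(events, window=1000, ratio=2.0):
    """Split an event stream into variable-length slices.

    events: array of shape (N, 4) with columns (t, x, y, polarity),
    sorted by timestamp t. A new slice starts whenever the local
    event rate changes by more than `ratio` relative to the current
    slice (a crude stand-in for an adaptive slicing criterion).
    """
    slices, start = [], 0
    for i in range(window, len(events), window):
        cur = events[start:i]
        nxt = events[i:i + window]
        if len(nxt) < window:
            break
        # mean inter-event interval as a proxy for motion speed
        cur_rate = (cur[-1, 0] - cur[0, 0]) / max(len(cur) - 1, 1)
        nxt_rate = (nxt[-1, 0] - nxt[0, 0]) / max(len(nxt) - 1, 1)
        if nxt_rate > ratio * cur_rate or cur_rate > ratio * nxt_rate:
            slices.append(events[start:i])
            start = i
    slices.append(events[start:])
    return slices
```

On a synthetic stream whose event spacing drops abruptly (a "fast" object entering the scene), this produces two slices of different lengths instead of fixed-duration windows.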
Finally, we propose a new evaluation metric, the Actual Performance Efficiency Discrepancy (APED), which combines actual distortion rate and event information entropy to quantify and compare the effectiveness of our method against other existing event representation methods. The final experimental results demonstrate that our event representation method outperforms existing approaches and addresses the shortcomings of current methods in handling event streams with multiple entities moving at varying speeds simultaneously.
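The published APED formula is not given in this record. As a hedged sketch of the general idea of combining a distortion rate with event information entropy, the toy metric below mixes the fraction of pixels that differ from a reference frame with the Shannon entropy of the event-count histogram; the `alpha` weighting, the distortion definition, and both function names are assumptions, not the authors' metric:

```python
import numpy as np

def event_frame_entropy(frame):
    """Shannon entropy (bits) of the normalized event-count histogram."""
    counts = frame[frame > 0].astype(float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def aped_score(frame, reference, alpha=0.5):
    """Toy stand-in for an APED-style metric: a weighted mix of a
    distortion rate (fraction of pixels differing from a reference
    frame) and the entropy of the event frame."""
    distortion = float(np.mean(frame != reference))
    return alpha * distortion + (1 - alpha) * event_frame_entropy(frame)
```

For a uniform 4x4 count frame the entropy is log2(16) = 4 bits, so a distortion-free comparison against itself scores 0.5 * 0 + 0.5 * 4 = 2.0 under this toy weighting.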
Pages: 24