Adaptive Unsupervised Learning-Based 3D Spatiotemporal Filter for Event-Driven Cameras

Times Cited: 1
Authors
Ben Miled, Meriem [1 ]
Liu, Wenwen [2 ]
Liu, Yuanchang [1 ]
Affiliations
[1] UCL, Dept Mech Engn, London, England
[2] Nanjing Univ Informat, Sch Automat Sci & Technol, Nanjing, Peoples R China
Keywords
Bioinformatics; Energy efficiency; Frequency domain analysis; Learning algorithms; Lighting; Machine learning; Population statistics; Robots; Spectral density
DOI
10.34133/research.0330
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences]
Subject Classification Codes
07; 0710; 09
Abstract
In the evolving landscape of robotics and visual navigation, event cameras have gained considerable traction, notably for their exceptional dynamic range, power efficiency, and low latency. Despite these advantages, conventional processing methods flatten the data into two dimensions, discarding critical temporal information. To overcome this limitation, we propose a novel method that treats events as 3D time-discrete signals. Drawing inspiration from the biological filtering mechanisms of the human visual system, we develop a 3D spatiotemporal filter based on an unsupervised machine learning algorithm. The filter reduces both noise levels and data volume, with its parameters adjusted dynamically according to population activity. This ensures adaptability and precision under varying conditions, such as changes in motion velocity and ambient lighting. In our validation approach, we first identify the noise type and estimate its power spectral density in the event stream. We then apply a one-dimensional discrete fast Fourier transform to assess the filtered event data in the frequency domain, confirming that the targeted noise frequencies are adequately attenuated. We also investigate the impact of indoor lighting on event stream noise. Our method yields a 37% reduction in the data point cloud, improving data quality in diverse outdoor settings.
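As a rough illustration of the frequency-domain check described in the abstract, the minimal sketch below (not the authors' implementation; the function name, the histogram-based binning scheme, and the default bin width are assumptions for illustration) bins an event stream's timestamps into a one-dimensional event-rate signal and inspects its power spectrum with a discrete FFT, so that raw and filtered streams can be compared at the targeted noise frequencies.

import numpy as np

def event_rate_spectrum(timestamps_us, bin_us=1000.0):
    """Return (frequencies in Hz, power spectrum) of the binned event rate.

    Illustrative only: the 1 ms default bin width and the binning approach
    are assumptions, not parameters taken from the paper.
    """
    t = np.asarray(timestamps_us, dtype=np.float64)
    t -= t.min()                                   # start the stream at t = 0
    edges = np.arange(0.0, t.max() + bin_us, bin_us)
    counts, _ = np.histogram(t, bins=edges)        # events per time bin
    rate = counts - counts.mean()                  # remove the DC component
    spectrum = np.fft.rfft(rate)                   # 1-D discrete FFT
    freqs = np.fft.rfftfreq(len(rate), d=bin_us * 1e-6)  # bin width in seconds
    return freqs, np.abs(spectrum) ** 2

Comparing the spectra returned for the raw and the filtered event streams then shows whether power at the previously identified noise frequencies has been attenuated.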
Pages: 17