A Dynamic Vision Sensor Sample Set Modeling Method Based on Frame Images

Cited by: 0
Authors
Lu X.-P. [1 ,4 ]
Wang M.-Y. [1 ,2 ]
Cao Y. [3 ]
Zhao R.-L. [4 ]
Zhou W. [3 ]
Li Z.-L. [1 ]
Wei S.-J. [2 ]
Affiliations
[1] Department of Computer Science and Technology, Tsinghua University, Beijing
[2] Institute of Microelectronics, Tsinghua University, Beijing
[3] Beijing Aerospace Chenxin Science and Technology Ltd., Beijing
[4] School of Information Science and Technology, Beijing University of Chemical Technology, Beijing
Source
Wang, Ming-Yu (mywang@mail.tsinghua.edu.cn) | Chinese Institute of Electronics / Vol. 48 (2020)
Keywords
Dynamic vision sensor; Event-driven; Memory optimization; Sample set modeling;
DOI
10.3969/j.issn.0372-2112.2020.08.001
Abstract
Dynamic vision sensors (DVS) offer significant advantages in computational latency, memory usage, and dynamic range by using an event-driven principle to extract features from moving objects. Current research shows that DVS-based neural networks markedly improve object-detection speed. However, the sample sets such networks require mainly rely on specific DVS cameras, and efficient methods for generating them are lacking, which limits the application and development of these networks. Based on the operating principle of the DVS, this paper presents a DVS sample-set modeling method that starts from frame images: address-event (AE) data are triggered by dynamic differential comparisons and logical judgments, then encoded and normalized into a sample set. Experiments modeling the MNIST and CIFAR-10 sample sets show that the sample sets produced by the proposed method closely match those captured by real DVS cameras. Compared with traditional frame-image sample sets, the method also reduces memory usage significantly. The generated sample sets have further been verified by training and testing a typical convolutional neural network. © 2020, Chinese Institute of Electronics. All rights reserved.
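The abstract describes generating address-event (AE) data from frame images through dynamic differential comparisons and logical judgments. The paper's actual implementation is not reproduced in this record; the following is a minimal Python sketch of that general idea under common DVS-simulation assumptions (log-intensity differencing against a per-pixel reference level, with a hypothetical `frames_to_events` name and an illustrative 0.15 contrast threshold):

```python
import numpy as np

def frames_to_events(frames, threshold=0.15, eps=1e-6):
    """Convert a sequence of grayscale frames (floats in [0, 1]) into
    DVS-style address-events: (x, y, t, polarity) tuples, polarity +1
    when log intensity rises past the threshold, -1 when it falls."""
    events = []
    log_ref = np.log(np.asarray(frames[0], dtype=float) + eps)  # per-pixel reference
    for t in range(1, len(frames)):
        log_cur = np.log(np.asarray(frames[t], dtype=float) + eps)
        diff = log_cur - log_ref           # dynamic differential comparison
        on = diff > threshold              # logical judgment: brightness rose
        off = diff < -threshold            # logical judgment: brightness fell
        for polarity, mask in ((1, on), (-1, off)):
            ys, xs = np.nonzero(mask)
            events.extend((int(x), int(y), t, polarity) for x, y in zip(xs, ys))
        # event-driven reset: only pixels that fired update their reference level
        fired = on | off
        log_ref[fired] = log_cur[fired]
    return events
```

Because only pixels that fire update their reference level, a static background produces no events at all, which is the source of the memory savings the abstract reports relative to dense frame-image sample sets.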
Pages: 1457-1464
Page count: 7
References
21 in total
  • [1] SZEGEDY C, LIU W, JIA Y, et al., Going deeper with convolutions, Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1-9, (2015)
  • [2] HE K, ZHANG X, REN S, et al., Deep residual learning for image recognition, Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778, (2016)
  • [3] REDMON J, DIVVALA S, GIRSHICK R, et al., You only look once: unified, real-time object detection, Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, pp. 779-788, (2016)
  • [4] LIU W, ANGUELOV D, ERHAN D, et al., SSD: single shot multibox detector, Proceedings of the 2016 European Conference on Computer Vision, pp. 21-37, (2016)
  • [5] DELBRUCK T., Frame-free dynamic digital vision, Proceedings of the International Symposium on Secure-Life Electronics, Advanced Electronics for Quality Life and Society, pp. 21-26, (2008)
  • [6] LUNGU I-A, CORRADI F, DELBRUCK T., Live demonstration: convolutional neural network driven by dynamic vision sensor playing RoShamBo, Proceedings of the 2017 IEEE International Symposium on Circuits and Systems (ISCAS), pp. 1-1, (2017)
  • [7] MOEYS D P, CORRADI F, KERR E, et al., Steering a predator robot using a mixed frame/event-driven convolutional neural network, Proceedings of the 2016 Second International Conference on Event-Based Control, Communication, and Signal Processing (EBCCSP), pp. 1-8, (2016)
  • [8] SIVILOTTI M A., Wiring considerations in analog VLSI systems, with application to field-programmable networks, (1991)
  • [9] MAHOWALD M., VLSI analogs of neuronal visual processing: a synthesis of form and function, (1992)
  • [10] YANG M, LIU S-C, DELBRUCK T., A dynamic vision sensor with 1% temporal contrast sensitivity and in-pixel asynchronous delta modulator for event encoding, IEEE Journal of Solid-State Circuits, 50, 9, pp. 2149-2160, (2015)