AEGNN: Asynchronous Event-based Graph Neural Networks

Cited by: 77
Authors
Schaefer, Simon [1]
Gehrig, Daniel
Scaramuzza, Davide
Affiliations
[1] Univ Zurich, Dept Informat, Zurich, Switzerland
Source
2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) | 2022
Funding
Swiss National Science Foundation
Keywords
VISION
DOI
10.1109/CVPR52688.2022.01205
CLC classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
The best-performing learning algorithms devised for event cameras work by first converting events into dense representations that are then processed using standard CNNs. However, these steps discard both the sparsity and the high temporal resolution of events, leading to high computational burden and latency. For this reason, recent works have adopted Graph Neural Networks (GNNs), which process events as "static" spatio-temporal graphs that are inherently "sparse". We take this trend one step further by introducing Asynchronous, Event-based Graph Neural Networks (AEGNNs), a novel event-processing paradigm that generalizes standard GNNs to process events as "evolving" spatio-temporal graphs. AEGNNs follow efficient update rules that restrict the recomputation of network activations to only the nodes affected by each new event, thereby significantly reducing both computation and latency for event-by-event processing. AEGNNs are easily trained on synchronous inputs and can be converted into efficient, "asynchronous" networks at test time. We thoroughly validate our method on object classification and detection tasks, where we show up to a 200-fold reduction in computational complexity (FLOPs), with similar or even better performance than state-of-the-art asynchronous methods. This reduction in computation directly translates into an 8-fold reduction in computational latency when compared to standard GNNs, which opens the door to low-latency event-based processing.
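The "evolving graph" idea in the abstract can be illustrated with a toy sketch. The code below is not the authors' implementation; it only shows the shape of the update rule: each incoming event becomes a graph node connected to spatio-temporal neighbours within a fixed radius, and only the new node plus its one-hop neighbourhood is flagged for recomputation (for an L-layer GNN, the dirty set would grow to the L-hop neighbourhood). The class name, the `radius` parameter, and the dirty-set convention are all assumptions made for illustration.

```python
import math


class EventGraph:
    """Toy evolving spatio-temporal event graph (illustrative only).

    Each event is a node (x, y, t). Edges link events whose Euclidean
    distance in (x, y, t) space is within `radius`. Inserting an event
    returns the set of node indices whose activations a one-layer GNN
    would need to recompute, mimicking AEGNN-style restricted updates.
    """

    def __init__(self, radius: float = 2.0):
        self.radius = radius
        self.nodes: list[tuple[float, float, float]] = []
        self.edges: dict[int, set[int]] = {}

    def insert(self, x: float, y: float, t: float) -> set[int]:
        """Insert one event; return the 'dirty' node indices."""
        idx = len(self.nodes)
        self.nodes.append((x, y, t))
        self.edges[idx] = set()
        # Connect to all existing spatio-temporal neighbours in range.
        for j, pj in enumerate(self.nodes[:-1]):
            if math.dist((x, y, t), pj) <= self.radius:
                self.edges[idx].add(j)
                self.edges[j].add(idx)
        # Only the new node and its one-hop neighbours need recomputation;
        # every other node's activation is unchanged by this event.
        return {idx} | self.edges[idx]
```

A distant event therefore dirties only itself, whereas a dense-graph approach would reprocess everything; this per-event locality is what the abstract's 200-fold FLOP reduction exploits at scale.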
Pages: 12361-12371
Page count: 11