A Graph Convolutional Neural Network Based Approach for Traffic Monitoring Using Augmented Detections with Optical Flow

Cited by: 11
Authors
Papakis, Ioannis [1 ]
Sarkar, Abhijit [2 ]
Karpatne, Anuj [1 ]
Affiliations
[1] Virginia Tech, Department of Computer Science, Blacksburg, VA 24061, USA
[2] Virginia Tech, Transportation Institute, Blacksburg, VA 24061, USA
Source
2021 IEEE Intelligent Transportation Systems Conference (ITSC), 2021
DOI
10.1109/ITSC48978.2021.9564655
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
This paper presents a novel method for Multi-Object Tracking (MOT) that uses Graph Convolutional Neural Network based feature extraction and end-to-end feature matching for object association. The graph-based approach incorporates both the appearance and the geometry of objects in past frames, as well as the current frame, into the task of feature learning. This paradigm enables the network to leverage the "context" provided by object geometry and allows us to model interactions among the features of multiple objects. Another central innovation of the proposed framework is the use of the Sinkhorn algorithm for end-to-end learning of object associations during model training: the network is trained to predict associations while accounting for constraints specific to the MOT task. To increase the detector's sensitivity, a new approach is also presented that propagates previous-frame detections into each new frame using optical flow; these are treated as additional object proposals, which are then classified as objects. A new traffic monitoring dataset is additionally provided, which includes naturalistic video footage from current infrastructure cameras in Virginia Beach City. Experimental evaluation demonstrates the efficacy of the proposed approaches on the provided dataset and the popular MOT Challenge Benchmark.
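The Sinkhorn step mentioned in the abstract enforces the MOT assignment constraint that each track matches at most one detection (and vice versa) in a differentiable way, by iteratively normalizing a similarity matrix toward a doubly-stochastic matrix. The sketch below illustrates only that normalization idea on a toy score matrix; the function name, iteration count, and scores are illustrative assumptions, not the paper's actual trained model or implementation.

```python
import numpy as np

def sinkhorn(scores, n_iters=20, eps=1e-8):
    """Alternately normalize rows and columns of a positive matrix
    so it approaches a doubly-stochastic assignment matrix."""
    P = np.exp(scores)  # map raw similarity scores to positive entries
    for _ in range(n_iters):
        P = P / (P.sum(axis=1, keepdims=True) + eps)  # rows sum to ~1
        P = P / (P.sum(axis=0, keepdims=True) + eps)  # columns sum to ~1
    return P

# Toy similarity scores between 3 tracked objects and 3 detections
# (hypothetical values; in the paper these would come from the GCN).
scores = np.array([[2.0, 0.1, 0.1],
                   [0.1, 2.0, 0.1],
                   [0.1, 0.1, 2.0]])
P = sinkhorn(scores)
```

Because every operation is differentiable, gradients from an association loss can flow through `P` back into the feature extractor, which is what makes the matching end-to-end trainable.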
Pages: 2980-2986 (7 pages)