Deep Traffic Benchmark: Aerial Perception and Driven Behavior Dataset

Cited by: 0
Authors
Zhang, Guoxing [1 ]
Li, Qiuping [1 ]
Liu, Yiming [1 ]
Wang, Zhanpeng [1 ]
Chen, Yuanqi [1 ]
Cai, Wenrui [1 ]
Zhang, Weiye [1 ]
Guo, Bingting [1 ]
Zeng, Zhi [1 ]
Zhu, Jiasong [1 ]
Affiliations
[1] Shenzhen Univ, Shenzhen, Guangdong, Peoples R China
Source
ASIAN CONFERENCE ON MACHINE LEARNING, VOL 222 | 2023 / Vol. 222
Keywords
Drone; Driving Behavior; End-to-End Planning; Vehicle Trajectory Prediction
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Predicting human driving behavior has long been an important area of autonomous driving research. Existing autonomous-driving datasets in this area are limited in both perspective and duration. For example, vehicles occlude one another on the road, even though data on the occluded vehicles behind them is useful for research. In addition, data collection from a host vehicle is constrained by the road environment, so the host vehicle cannot observe a designated area for an extended period of time. To investigate the potential relationship between human driving behavior and traffic conditions, we provide a drone-collected video dataset, Deep Traffic, that includes: (1) aerial footage from a vertical (bird's-eye) perspective; (2) images and annotations for training vehicle detection and semantic segmentation models; (3) high-definition map data of the captured area; and (4) development scripts for various features. Deep Traffic is the largest and most comprehensive such dataset to date, covering both urban and highway areas. We believe that this benchmark dataset will greatly facilitate the use of drones to monitor traffic flow, to study human driver behavior, and to assess the capacity of the traffic system. All datasets and pre-training results can be downloaded from the GitHub project.
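The abstract describes per-vehicle trajectory annotations extracted from aerial footage. A minimal sketch of how such annotations might be grouped into per-vehicle trajectories is shown below; the CSV column names (frame, vehicle_id, x, y) are assumptions for illustration only, as the actual schema is defined by the project's own development scripts.

```python
# Hypothetical parser for Deep Traffic-style per-frame vehicle annotations.
# Column names and the sample data below are illustrative assumptions,
# not the dataset's actual schema.
import csv
import io
from collections import defaultdict

SAMPLE = """frame,vehicle_id,x,y
0,7,100.0,50.0
1,7,101.5,50.2
2,7,103.1,50.5
0,9,40.0,12.0
1,9,40.8,12.1
"""

def load_trajectories(text):
    """Group per-frame (x, y) positions into one trajectory per vehicle."""
    tracks = defaultdict(list)
    for row in csv.DictReader(io.StringIO(text)):
        tracks[int(row["vehicle_id"])].append(
            (int(row["frame"]), float(row["x"]), float(row["y"]))
        )
    # Sort each trajectory by frame index so it is time-ordered.
    return {vid: sorted(pts) for vid, pts in tracks.items()}

tracks = load_trajectories(SAMPLE)
print(len(tracks[7]))  # vehicle 7 has 3 observations -> prints 3
```

Time-ordered trajectories in this form are the usual input for the trajectory-prediction task the paper targets.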
Pages: 13