Deep Learning-Based Computer Vision Methods for Complex Traffic Environments Perception: A Review

Cited by: 7
Authors
Talha Azfar
Jinlong Li
Hongkai Yu
Ruey L. Cheu
Yisheng Lv
Ruimin Ke
Affiliations
[1] Rensselaer Polytechnic Institute, Troy, 12180, NY
[2] Cleveland State University, Cleveland, 44115, OH
[3] The University of Texas at El Paso, El Paso, 79968, TX
[4] Institute of Automation, Chinese Academy of Sciences, Beijing
Source
Data Science for Transportation | 2024, Vol. 6, Issue 1
Funding
U.S. National Science Foundation
Keywords
Autonomous driving; Complex traffic environment; Computer vision; Deep learning; Intelligent transportation systems
DOI
10.1007/s42421-023-00086-7
Abstract
Computer vision applications in intelligent transportation systems (ITS) and autonomous driving (AD) have gravitated towards deep neural network architectures in recent years. While performance seems to be improving on benchmark datasets, many real-world challenges are yet to be adequately considered in research. This paper conducts an extensive literature review on the applications of computer vision in ITS and AD, and discusses challenges related to data, models, and complex urban environments. The data challenges are associated with the collection and labeling of training data and its relevance to real-world conditions, bias inherent in datasets, the high volume of data to be processed, and privacy concerns. Deep learning (DL) models are commonly too complex for real-time processing on embedded hardware, lack explainability and generalizability, and are hard to test in real-world settings. Complex urban traffic environments have irregular lighting and occlusions, and surveillance cameras can be mounted at a variety of angles, gather dirt, and shake in the wind, while traffic conditions are highly heterogeneous, with rule violations and complex interactions in crowded scenarios. Some representative applications that suffer from these problems are traffic flow estimation, congestion detection, autonomous driving perception, vehicle interaction, and edge computing for practical deployment. Possible ways of dealing with these challenges are also explored, with practical deployment prioritized. © The Author(s), under exclusive licence to Springer Nature Singapore Pte Ltd. 2024.