Deep Learning Approaches for Vehicle and Pedestrian Detection in Adverse Weather

Cited by: 0
Authors
Zaman, Mostafa [1 ]
Saha, Sujay [2 ]
Zohrabi, Nasibeh [3 ]
Abdelwahed, Sherif [1 ]
Affiliations
[1] Virginia Commonwealth Univ, Dept Elect & Comp Engn, Richmond, VA 23220 USA
[2] Univ Dhaka, Dept Elect & Elect Engn, Dhaka, Bangladesh
[3] Penn State Univ, Dept Engn, Media, PA USA
Source
2023 IEEE TRANSPORTATION ELECTRIFICATION CONFERENCE & EXPO, ITEC | 2023
Keywords
Vehicle Detection; Pedestrian Detection; Deep Learning; Adverse Nature; DAWN Dataset; Intelligent Transportation System; YOLO; F-RCNN;
DOI
10.1109/ITEC55900.2023.10187020
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics & Communication Technology];
Discipline Codes
0808; 0809;
Abstract
Vision-based vehicle and pedestrian identification methods for enhancing road safety have become increasingly important over the last decade. Unfortunately, these approaches lack robustness because of large variations in vehicle shape, crowded surroundings, changing lighting conditions, weather, and driving behavior. Developing automated vehicle and pedestrian detection and tracking systems is therefore a demanding research area in Intelligent Transportation Systems (ITS). In recent years, image-based deep-learning object-detection algorithms have become effective tools for recognizing road objects, and deep-learning methods for detecting road vehicles have achieved remarkable results. However, while numerous studies have extensively examined different deep-learning techniques, few have integrated adverse weather conditions into standard deep-learning vehicle detection pipelines. This paper presents qualitative and quantitative analyses of four recent deep-learning object-detection algorithms for vehicle and pedestrian identification, namely Faster R-CNN, SSD, HOG, and YOLOv7, and classifies weather conditions using a real-world dataset (DAWN). Experimental findings verify the efficacy of the proposed technique, which applies state-of-the-art vehicle detection and tracking methods under unfavorable and adverse conditions.
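For readers who want a feel for the kind of per-image comparison the abstract describes, the sketch below runs a COCO-pretrained Faster R-CNN from torchvision on a single adverse-weather image and keeps only vehicle and pedestrian detections. This is a minimal illustration, not the authors' pipeline: the file name dawn_fog_001.jpg, the 0.5 score threshold, and the kept class subset are assumptions, and the paper's other detectors (SSD, HOG, YOLOv7) would be evaluated on the same images in the same way.

```python
import torch
import torchvision
from torchvision.transforms import functional as F
from PIL import Image

# COCO-pretrained Faster R-CNN, one of the detector families compared in the paper.
# weights="DEFAULT" requires torchvision >= 0.13; older versions use pretrained=True.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# Hypothetical file name standing in for one DAWN image (fog, rain, snow, or sandstorm).
image = Image.open("dawn_fog_001.jpg").convert("RGB")
tensor = F.to_tensor(image)  # PIL image -> CxHxW float tensor in [0, 1]

with torch.no_grad():
    predictions = model([tensor])[0]  # dict with "boxes", "labels", "scores"

# Keep only pedestrian/vehicle classes above an assumed confidence threshold.
# COCO category ids: 1 = person, 3 = car, 6 = bus, 8 = truck.
KEEP = {1, 3, 6, 8}
for box, label, score in zip(predictions["boxes"], predictions["labels"], predictions["scores"]):
    if score >= 0.5 and int(label) in KEEP:
        print(int(label), round(float(score), 3), [round(v, 1) for v in box.tolist()])
```

Counting the surviving detections against ground-truth boxes per weather category is one simple way to obtain the kind of quantitative comparison across detectors that the abstract refers to.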
Pages: 6