Pothole Detection for Autonomous Vehicles in Indian Scenarios using Deep Learning

Cited by: 1
Authors
Srikanth, H. N. [1 ]
Reddy, D. Santhosh [2 ]
Sonkar, Dinesh Kumar [3 ]
Kumar, Ronit [3 ]
Rajalakshmi, P. [4 ]
Affiliations
[1] IIT Hyderabad, Dept Elect Engn, Hyderabad, India
[2] Suzuki Motor Corp, Hamamatsu, Shizuoka, Japan
[3] Suzuki R&D Ctr India Private Ltd, Rohtak, Haryana, India
[4] IIT Hyderabad, Dept Elect Engn, NMICPS TIHAN, Hyderabad, India
Source
2023 IEEE 26th International Symposium on Real-Time Distributed Computing (ISORC), 2023
Keywords
Autonomous vehicles; Potholes; Indian roads; FRCNN; YOLOv5;
DOI
10.1109/ISORC58943.2023.00033
Chinese Library Classification: TP3 [Computing Technology, Computer Technology]
Subject Classification Code: 0812
Abstract
The Ministry of Road Transport and Highways of India reported that 4,775 road crashes in 2019 and 3,564 in 2020 were caused by potholes. Autonomous vehicles are expected to revolutionize transportation in India, but their safe operation depends on detecting potholes effectively. From the perspective of an autonomous vehicle, potholes are typically treated as static objects, and they pose a danger to road commuters, particularly at high speeds. They usually develop during rainy seasons or through continuous use of roads by heavy vehicles such as trucks. With advances in image processing and deep learning, pothole detection has become feasible, and considerable research has already produced several detection methods. The work proposed in this paper improves along those lines. Besides improving detection accuracy, we implemented our models on an autonomous test vehicle; testing the models in real time exposed several bottlenecks in developing an end-to-end solution for pothole detection. Our approach uses the Faster Region-based Convolutional Neural Network (FRCNN) and You Only Look Once (YOLOv5) object detection algorithms, which yielded noticeable results after thorough experimentation.
Pages: 184-189
Page count: 6
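
The abstract names FRCNN and YOLOv5 as the detection models, but the record contains no implementation details. The snippet below is a minimal sketch of how a custom-trained YOLOv5 detector might be loaded and run on a single road image via the official ultralytics/yolov5 torch hub entry point; the weights file name (pothole_yolov5s.pt), image path, and confidence threshold are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch: inference with a custom-trained YOLOv5 model for pothole
# detection. The weights file, image path, and threshold are assumed values.
import torch

# Load a custom YOLOv5 checkpoint through the official ultralytics/yolov5 hub.
# 'pothole_yolov5s.pt' is a hypothetical checkpoint trained on pothole images.
model = torch.hub.load('ultralytics/yolov5', 'custom', path='pothole_yolov5s.pt')
model.conf = 0.4  # discard detections below 40% confidence (assumed threshold)

# Run inference on one road image; the AutoShape wrapper accepts file paths,
# PIL images, or numpy arrays.
results = model('road_scene.jpg')

# Each row of results.xyxy[0] is [x1, y1, x2, y2, confidence, class_id].
for *box, conf, cls in results.xyxy[0].tolist():
    print(f'pothole at {[round(v, 1) for v in box]} with confidence {conf:.2f}')
```

In practice such a detector would run on each camera frame of the test vehicle, with the bounding boxes passed downstream; FRCNN could be substituted for YOLOv5 behind the same per-frame interface.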