A Quality Index Metric and Method for Online Self-Assessment of Autonomous Vehicles Sensory Perception

Citations: 0
Authors
Zhang, Ce [1 ]
Eskandarian, Azim [1 ]
Affiliations
[1] Virginia Tech, Dept Mech Engn, ASIM Lab, Blacksburg, VA 24060 USA
Keywords
Object detection; Feature extraction; Prediction algorithms; Autonomous vehicles; Computational modeling; Classification algorithms; Indexes; Autonomous vehicle; neural network; computer vision; image processing; image quality assessment
DOI
10.1109/TITS.2023.3303320
Chinese Library Classification (CLC)
TU [Building Science]
Discipline Classification Code
0813
Abstract
Reliable object detection using cameras plays a crucial role in enabling autonomous vehicles to perceive their surroundings. However, existing camera-based object detection approaches for autonomous driving lack the ability to provide comprehensive feedback on detection performance for individual frames. To address this limitation, we propose a novel evaluation metric, named the detection quality index (DQI), which assesses the performance of camera-based object detection algorithms and provides frame-by-frame feedback on detection quality. The DQI is generated by combining the intensity of the fine-grained saliency map with the output results of the object detection algorithm. Additionally, we have developed a superpixel-based attention network (SPA-NET) that uses raw image pixels and superpixels as input to predict the proposed DQI evaluation metric. To validate our approach, we conducted experiments on three open-source datasets. The results demonstrate that the proposed evaluation metric accurately assesses the detection quality of camera-based systems in autonomous driving environments. Furthermore, the proposed SPA-NET outperforms other popular image-based quality regression models. This highlights the effectiveness of the DQI in evaluating a camera's ability to perceive visual scenes. Overall, our work introduces a valuable self-evaluation tool for camera-based object detection in autonomous vehicles.
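The abstract states only that the DQI combines the intensity of a fine-grained saliency map with the detector's outputs; the exact formulation is given in the full paper, not here. As a rough illustration of that idea only, the Python sketch below scores a frame by averaging saliency intensity inside each detected box, weighted by the detector's confidence. The function name compute_dqi, the box format, and the confidence weighting are all assumptions for illustration, not the authors' method.

```python
# Illustrative sketch only: the paper's exact DQI formulation is not given in
# the abstract. This assumes a simple combination of per-box saliency
# intensity with detection confidences.
import numpy as np


def compute_dqi(saliency, boxes, scores):
    """Combine saliency-map intensity with detector outputs (hypothetical).

    saliency : HxW array of saliency values in [0, 1]
    boxes    : detected boxes as (x1, y1, x2, y2) pixel coordinates
    scores   : detector confidence for each box
    """
    per_box = []
    for (x1, y1, x2, y2), conf in zip(boxes, scores):
        patch = saliency[y1:y2, x1:x2]
        if patch.size == 0:
            continue
        # Weight the mean saliency inside the box by the detector confidence.
        per_box.append(float(patch.mean()) * conf)
    return float(np.mean(per_box)) if per_box else 0.0


# Example usage with random data standing in for a saliency map and detections.
rng = np.random.default_rng(0)
saliency_map = rng.random((480, 640))
print(f"DQI (illustrative): {compute_dqi(saliency_map, [(100, 120, 220, 260)], [0.9]):.3f}")
```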
Pages: 13801-13812
Number of pages: 12