Reliability Evaluation of Visualization Performance of Convolutional Neural Network Models for Automated Driving

Cited by: 0
Authors
Zhang C. [1]
Okafuji Y. [1]
Wada T. [1]
Affiliations
[1] Ritsumeikan University, Graduate School of Information Science and Engineering, 1-1-1 Nojihigashi, Kusatsu, Shiga
Source
Corresponding author: Zhang, Chenkai (zhang1354558057@gmail.com) | Society of Automotive Engineers of Japan, Vol. 12, No. 2 (2021)
Funding
Japan Society for the Promotion of Science
Keywords
Autonomous Vehicle Navigation; CNN; Vision Systems
DOI
10.20485/JSAEIJAE.12.2_41
Abstract
As deep learning methods have achieved excellent performance in image recognition, researchers have begun to apply convolutional neural networks (CNNs) to automated driving. However, explainability of the decision making in automated driving is highly desired. To build trust in models for automated driving, visualization methods have become important for understanding the internal computation of CNNs. Therefore, in a previous study, we proposed a method to evaluate the visualization performance of CNN models by using a mathematical model, instead of a human driver, to generate a dataset in which the ground-truth point in each image can be determined. However, the reliability of the proposed method for validating visualization performance had not been established. In this paper, we therefore verify the proposed method through two experiments that examine task-dependent performance and visualization performance during training. The experimental results demonstrate the reliability of the visualization performance, and we thus provide an evaluation method for visualization performance in automated driving systems. Copyright © 2021 Society of Automotive Engineers of Japan, Inc. All rights reserved.
Pages: 41-47
Page count: 6
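
The abstract describes scoring a CNN visualization against a ground-truth attention point that is known because the training data were generated by a mathematical driver model. The sketch below is a rough illustration only, not the paper's implementation: it computes a standard vanilla-gradient saliency map (Simonyan et al.) for a toy PyTorch regression CNN and reports the pixel distance between the saliency peak and an assumed ground-truth point. The model, input size, function names, and the Euclidean peak-distance metric are all assumptions for illustration.

# Minimal sketch (not the paper's method): compare a saliency map's peak
# against a known ground-truth attention point in the input image.
# The toy CNN, input size, and distance metric are illustrative assumptions.
import torch
import torch.nn as nn


def gradient_saliency(model: nn.Module, image: torch.Tensor) -> torch.Tensor:
    """Vanilla gradient saliency: |d output / d input|, reduced over channels.
    `image` has shape (1, C, H, W); the result has shape (H, W)."""
    image = image.clone().requires_grad_(True)
    output = model(image)              # e.g., a predicted steering value, shape (1, 1)
    output.sum().backward()            # gradient of the scalar prediction w.r.t. pixels
    return image.grad.abs().max(dim=1).values.squeeze(0)


def peak_distance(saliency: torch.Tensor, gt_point: tuple[int, int]) -> float:
    """Euclidean distance (pixels) between the saliency peak and the
    ground-truth point (row, col) known from the dataset-generation model."""
    flat_idx = torch.argmax(saliency)
    row, col = divmod(int(flat_idx), saliency.shape[1])
    return float(((row - gt_point[0]) ** 2 + (col - gt_point[1]) ** 2) ** 0.5)


if __name__ == "__main__":
    # Toy stand-in CNN; an actual study would use the trained driving model.
    model = nn.Sequential(
        nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1),
    )
    image = torch.rand(1, 3, 66, 200)          # PilotNet-style input size (assumed)
    saliency = gradient_saliency(model, image)
    print("peak-to-ground-truth distance:", peak_distance(saliency, (33, 100)))

A smaller distance means the visualization attends closer to the point the ground-truth model actually used; the paper's own evaluation criterion may differ from this simple peak-distance score.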