Refined Objectification for Improving End-to-End Driving Model Explanation Persuasibility

Cited by: 0
Authors
Zhang, Chenkai [1 ]
Deguchi, Daisuke [1 ]
Murase, Hiroshi [1 ]
Affiliations
[1] Nagoya Univ, Grad Sch Informat, Furo-cho, Chikusa-ku, Nagoya, Aichi, Japan
Source
2023 IEEE INTELLIGENT VEHICLES SYMPOSIUM, IV | 2023
Keywords
explainability; autonomous vehicles; deep learning; convolutional neural networks; NEURAL-NETWORK;
DOI
10.1109/IV55152.2023.10186742
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
With the rapid development of deep learning, many end-to-end autonomous driving models with high prediction accuracy have been developed. However, since autonomous driving technology is closely tied to human life, users need to be convinced that end-to-end driving models (E2EDMs) achieve high prediction accuracy not only in known scenarios but also in the unknown scenarios encountered in practice. Therefore, engineers and end-users need to grasp how E2EDMs compute their predictions from the models' explanations, and these explanations must be satisfactory. However, few studies have focused on improving the quality of such explanations. In this study, among the many properties of explanations, we aim to improve persuasibility. We propose ROB (refined objectification branches), a structure that can be mounted on any type of existing E2EDM. Through persuasibility evaluation experiments, we demonstrate that mounting ROB on an E2EDM improves the persuasibility of its explanations. As shown in Fig. 1, thanks to ROB, the focus area of the driving model accurately shrinks to the important and concise objects. We also perform an ablation study to further discuss each branch's influence on persuasibility. In addition, we test ROB on multiple mainstream backbones and demonstrate that our structure also improves the models' prediction accuracy.
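This record gives no architectural details of ROB, so the following is only a minimal, hypothetical PyTorch sketch of the general idea the abstract describes: mounting auxiliary branches on an existing backbone next to the main driving-control head. The class name E2EDMWithBranches, the ResNet-18 backbone, the number of branches, the scalar branch outputs, and the loss weighting are all illustrative assumptions, not the paper's actual ROB design.

import torch
import torch.nn as nn
import torchvision.models as models

class E2EDMWithBranches(nn.Module):
    # Hypothetical end-to-end driving model: a CNN backbone feeding a main
    # control head plus auxiliary branches mounted on the same features.
    def __init__(self, num_branches: int = 2):
        super().__init__()
        backbone = models.resnet18(weights=None)  # any mainstream backbone could be substituted
        self.features = nn.Sequential(*list(backbone.children())[:-1])  # conv stages + global pool
        feat_dim = backbone.fc.in_features  # 512 for ResNet-18

        # Main head: predicts the driving control value (e.g., a steering angle).
        self.control_head = nn.Linear(feat_dim, 1)

        # Auxiliary branches: each predicts an additional auxiliary target.
        # (Illustrative only; the paper's actual branch targets are not given in this record.)
        self.branches = nn.ModuleList([nn.Linear(feat_dim, 1) for _ in range(num_branches)])

    def forward(self, x):
        f = self.features(x).flatten(1)                 # (B, feat_dim) pooled features
        control = self.control_head(f)                  # main driving prediction
        aux = [branch(f) for branch in self.branches]   # auxiliary predictions
        return control, aux

# A training step would typically combine the main loss with weighted auxiliary losses,
# e.g. loss = mse(control, target) + sum of w_i * mse(aux_i, aux_target_i).

In such a setup, explanation methods (e.g., attribution maps over the input image) are then applied to the combined model; the abstract's claim is that adding the branches makes the resulting explanations more persuasive.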
Pages: 6
Related Papers
50 records in total
  • [21] Han C.-J., Ri U.-C., Mun S.-I., Jang K.-S., Han S.-Y. An end-to-end TTS model with pronunciation predictor. International Journal of Speech Technology, 2022, 25(4): 1013-1024
  • [22] Zhou P., Zhao Z., Zhang K., Li C., Wang C. An end-to-end model for Chinese calligraphy generation. Multimedia Tools and Applications, 2021, 80: 6737-6754
  • [23] You K., Ding L., Jiang Y., Wu Z., Zhou C. End-to-end deep learning for reverse driving trajectory of autonomous bulldozer. Knowledge-Based Systems, 2022, 252
  • [24] Zhou P., Zhao Z., Zhang K., Li C., Wang C. An end-to-end model for Chinese calligraphy generation. Multimedia Tools and Applications, 2021, 80(5): 6737-6754
  • [25] Thu N. T. H., Han D. S. An End-to-End Motion Planner Using Sensor Fusion for Autonomous Driving. 2023 International Conference on Artificial Intelligence in Information and Communication (ICAIIC), 2023: 678-683
  • [26] Amin H., Darwish A., Hassanien A. E., Soliman M. End-to-End Deep Learning Model for Corn Leaf Disease Classification. IEEE Access, 2022, 10: 31103-31115
  • [27] Jun K., Choi S. Unsupervised End-to-End Deep Model for Newborn and Infant Activity Recognition. Sensors, 2020, 20(22): 1-17
  • [28] Boualam M., Elfakir Y., Khaissidi G., Mrabti M., Aouraghe I. Improving end-to-end deep learning methods for Arabic handwriting recognition. Journal of Electronic Imaging, 2022, 31(6)
  • [29] Ryan C., Murphy F., Mullins M. End-to-End Autonomous Driving Risk Analysis: A Behavioural Anomaly Detection Approach. IEEE Transactions on Intelligent Transportation Systems, 2021, 22(3): 1650-1662
  • [30] Chen J., Li S. E., Tomizuka M. Interpretable End-to-End Urban Autonomous Driving With Latent Deep Reinforcement Learning. IEEE Transactions on Intelligent Transportation Systems, 2022, 23(6): 5068-5078