Refined Objectification for Improving End-to-End Driving Model Explanation Persuasibility

Times Cited: 0
Authors
Zhang, Chenkai [1 ]
Deguchi, Daisuke [1 ]
Murase, Hiroshi [1 ]
Affiliations
[1] Nagoya Univ, Grad Sch Informat, Furo Cho, Chikusa Ku, Nagoya, Aichi, Japan
Source
2023 IEEE INTELLIGENT VEHICLES SYMPOSIUM, IV | 2023
Keywords
explainability; autonomous vehicles; deep learning; convolutional neural networks; NEURAL-NETWORK;
DOI
10.1109/IV55152.2023.10186742
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
With the rapid development of deep learning, many end-to-end autonomous driving models with high prediction accuracy have been developed. However, since autonomous driving technology is closely tied to human life, users need to be convinced that end-to-end driving models (E2EDMs) achieve high prediction accuracy not only in known scenarios but also in the unknown scenarios encountered in practice. Therefore, engineers and end-users need to grasp how E2EDMs compute their predictions from the models' explanations and to ensure that those explanations are satisfactory. However, few studies have focused on improving explanation quality. In this study, among the many properties of explanations, we aim to improve persuasibility; to this end, we propose ROB (refined objectification branches), a structure that can be mounted on any existing E2EDM. Through persuasibility evaluation experiments, we demonstrate that mounting ROB on an E2EDM improves the persuasibility of its explanations. As shown in Fig. 1, with ROB the focus area of the driving model shrinks accurately to the important and concise objects. We also perform an ablation study to further discuss each branch's influence on persuasibility. In addition, we test ROB on multiple mainstream backbones and demonstrate that our structure also improves the models' prediction accuracy.
Pages: 6