With the rapid development of deep learning, many end-to-end autonomous driving models (E2EDMs) with high prediction accuracy have been developed. However, because autonomous driving is safety-critical, users must be convinced that E2EDMs achieve high prediction accuracy not only in known scenarios but also in the unknown scenarios encountered in practice. Engineers and end-users therefore need to understand how E2EDMs compute their predictions through the models' explanations, and these explanations must be satisfactory. However, few studies have focused on improving the quality of such explanations. In this study, among the many properties an explanation can have, we aim to improve persuasibility: we propose ROB (refined objectification branches), a structure that can be mounted on any existing E2EDM. Through persuasibility evaluation experiments, we demonstrate that mounting ROB on an E2EDM improves the persuasibility of its explanations. As shown in Fig. 1, with ROB the focus area of the driving model shrinks accurately to a concise set of important objects. We also perform an ablation study to further analyze each branch's influence on persuasibility. In addition, we test ROB on multiple mainstream backbones and demonstrate that our structure also improves the models' prediction accuracy.
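This section does not specify ROB's internals, so the following is only a minimal, hypothetical sketch of what "mounting branches on an existing backbone" could look like in PyTorch. The class name `BranchWrapper`, the branch architecture, and all dimensions are our assumptions for illustration, not the paper's actual ROB design.

```python
import torch
import torch.nn as nn


class BranchWrapper(nn.Module):
    """Hypothetical sketch: mounts auxiliary prediction branches on an
    arbitrary backbone, next to the original driving-output head.
    (Illustrative only; not the paper's actual ROB design.)"""

    def __init__(self, backbone: nn.Module, feat_dim: int,
                 branch_dims: list[int], num_driving_outputs: int = 1):
        super().__init__()
        self.backbone = backbone  # any existing E2EDM feature extractor
        self.driving_head = nn.Linear(feat_dim, num_driving_outputs)
        # One auxiliary branch per supervision target (assumed structure).
        self.branches = nn.ModuleList(
            nn.Sequential(nn.Linear(feat_dim, feat_dim // 2),
                          nn.ReLU(),
                          nn.Linear(feat_dim // 2, d))
            for d in branch_dims
        )

    def forward(self, x: torch.Tensor):
        feat = self.backbone(x)                        # shared features
        driving_out = self.driving_head(feat)          # original driving prediction
        branch_outs = [b(feat) for b in self.branches]  # auxiliary outputs
        return driving_out, branch_outs


# Usage sketch with a toy backbone standing in for any mainstream backbone.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU())
model = BranchWrapper(backbone, feat_dim=128, branch_dims=[4, 2])
driving_out, branch_outs = model(torch.randn(8, 3, 32, 32))
print(driving_out.shape, [o.shape for o in branch_outs])
```

Because the branches share the backbone's features, such a wrapper leaves the original model unchanged while adding auxiliary supervision signals, which is consistent with the claim that the structure can be attached to multiple mainstream backbones.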