Given the critical need for more reliable autonomous driving systems, explainability has become a key focus within the research community. In autonomous driving models, even minor perception differences can significantly influence decision-making, and this influence often diverges markedly from human cognition. However, understanding exactly why a model decides to stop or to keep moving forward remains a significant challenge. This paper presents an attribution-guided visualization method that explores the triggers behind decision shifts, providing clear insights into the "why" and "why not" of such decisions. We propose a cumulative layer fusion attribution method that identifies the parameters most critical to decision-making. These attributions then inform the visualization optimization: attribution-guided weights are applied to the crucial generation parameters, ensuring that decision changes are driven only by modifications to critical information. Furthermore, we develop an indirect regularization method that improves visualization quality without requiring additional hyperparameters. Experiments on large-scale datasets demonstrate that our method produces insightful visualization explanations and outperforms state-of-the-art methods in both qualitative and quantitative evaluations.
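
To make the described pipeline concrete, the following is a minimal, hypothetical PyTorch sketch of the two ideas summarized above: fusing attributions accumulated over layers into a single map, and using that map to weight a counterfactual visualization objective so that the decision change is driven only by highly attributed regions. It is not the paper's implementation; the toy model, the gradient-times-activation attribution, and all names (TinyDriver, cumulative_fused_attribution, attribution_guided_counterfactual) are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyDriver(nn.Module):
    """Toy stand-in for a driving policy that outputs {stop, go} logits."""

    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 8, 3, padding=1)
        self.conv2 = nn.Conv2d(8, 16, 3, padding=1)
        self.head = nn.Linear(16, 2)

    def forward(self, x, return_feats=False):
        f1 = F.relu(self.conv1(x))
        f2 = F.relu(self.conv2(F.max_pool2d(f1, 2)))
        logits = self.head(f2.mean(dim=(2, 3)))
        return (logits, [f1, f2]) if return_feats else logits


def cumulative_fused_attribution(model, x, target):
    """Gradient-times-activation per layer, upsampled to input resolution and
    accumulated into one map (one plausible reading of 'cumulative layer fusion')."""
    x = x.detach().clone().requires_grad_(True)
    logits, feats = model(x, return_feats=True)
    score = logits[:, target].sum()
    grads = torch.autograd.grad(score, feats)
    fused = torch.zeros_like(x[:, :1])
    for f, g in zip(feats, grads):
        attr = (f * g).sum(dim=1, keepdim=True)          # per-layer attribution
        attr = F.interpolate(attr, size=x.shape[-2:], mode="bilinear",
                             align_corners=False)
        fused = fused + attr.relu()                       # cumulative fusion across layers
    return fused / (fused.amax(dim=(2, 3), keepdim=True) + 1e-8)


def attribution_guided_counterfactual(model, x, target, steps=100, lr=0.05):
    """Optimize a perturbation weighted by the fused attribution so that only
    decision-critical regions are allowed to change the decision."""
    weight = cumulative_fused_attribution(model, x, target).detach()
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    labels = torch.full((x.shape[0],), target, dtype=torch.long)
    for _ in range(steps):
        x_cf = (x + weight * delta).clamp(0, 1)           # attribution-guided edit
        loss = F.cross_entropy(model(x_cf), labels) + 0.1 * delta.abs().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x + weight * delta).clamp(0, 1).detach()


if __name__ == "__main__":
    model, image = TinyDriver().eval(), torch.rand(1, 3, 64, 64)
    counterfactual = attribution_guided_counterfactual(model, image, target=0)
    print(counterfactual.shape)  # torch.Size([1, 3, 64, 64])
```

In this sketch the fused attribution acts as a fixed spatial weight on the perturbation; the indirect regularization mentioned in the abstract (which avoids extra hyperparameters) is not reproduced here, and the 0.1 sparsity coefficient is merely a placeholder.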