Deep neural networks (DNNs) often suffer from opacity, inherent biases, and shortcut learning, which undermine their practical reliability. In this study, we address these issues by constructing a causal graph to model the unbiased learning process of DNNs. This model reveals that recurring background information in training samples acts as a confounder, inducing spurious correlations between model inputs and outputs and thereby biasing predictions. To mitigate these problems and promote unbiased feature learning, we propose the Object-guided Consistency Interpretation Enhancement (OCIE) method. OCIE improves DNN interpretability by integrating explicit objects and explanations into the model's learning process. First, OCIE employs a graph-based algorithm to identify explicit objects within the features learned by a self-supervised vision transformer. It then constructs class prototypes to discard invalid detected objects. Finally, OCIE aligns explanations with the explicit objects, directing the model's attention toward the most distinctive classification features rather than irrelevant backgrounds. Extensive experiments on image classification datasets spanning general (ImageNet), fine-grained (Stanford Cars and CUB-200), and medical (HAM) domains, using two prevailing network architectures, demonstrate that OCIE significantly improves explanation consistency across all datasets. OCIE is particularly advantageous for fine-grained classification, especially in few-shot scenarios, improving both interpretability and classification performance. Our findings also highlight the impact of centralized explanations on the sufficiency of model decisions, suggesting that focusing explanations on explicit objects improves the reliability of DNN predictions. Our code is available at: https://github.com/DLAIResearch/OCIE.
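To make the final alignment step concrete, the sketch below illustrates one plausible way to align a CAM-style explanation with a detected object region; it is a hypothetical illustration under our own assumptions, not the authors' OCIE implementation, and the function and tensor names are ours.

```python
# Hypothetical sketch of explanation-object alignment (not the authors' OCIE code).
# Assumes access to the backbone's spatial feature maps and classifier logits,
# plus a precomputed binary object mask per image (e.g., from a graph-based
# object-discovery step on self-supervised ViT features).
import torch
import torch.nn.functional as F

def explanation_alignment_loss(features, logits, labels, object_masks):
    """Encourage a Grad-CAM-style explanation to concentrate inside the object mask.

    features:     (B, C, H, W) spatial feature maps from the backbone
    logits:       (B, num_classes) classifier outputs
    labels:       (B,) ground-truth class indices
    object_masks: (B, 1, H', W') binary masks of the detected objects
    """
    # Gradient of the target-class score w.r.t. the feature maps (CAM weights).
    target_scores = logits.gather(1, labels.unsqueeze(1)).sum()
    grads = torch.autograd.grad(target_scores, features, create_graph=True)[0]
    weights = grads.mean(dim=(2, 3), keepdim=True)                 # (B, C, 1, 1)

    # Weighted combination of feature maps, upsampled to the mask resolution.
    cam = F.relu((weights * features).sum(dim=1, keepdim=True))    # (B, 1, H, W)
    cam = F.interpolate(cam, size=object_masks.shape[-2:],
                        mode="bilinear", align_corners=False)
    cam = cam / (cam.amax(dim=(2, 3), keepdim=True) + 1e-6)        # normalize to [0, 1]

    # Penalize explanation mass that falls outside the object region.
    outside = cam * (1.0 - object_masks)
    return outside.mean()
```

In such a setup, this term would typically be added to the standard cross-entropy objective with a weighting coefficient, so that the classifier is trained jointly to predict correctly and to keep its explanations on the explicit object rather than the background.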