Deep learning algorithms have achieved remarkable accuracy in domains such as image classification, face recognition, sentiment analysis, text classification, healthcare, and self-driving vehicles. However, their complex and opaque structures often hinder their adoption in mission-critical applications, and the lack of interpretability in these models raises concerns about their reliability and trustworthiness. To address this challenge, Explainable Artificial Intelligence (XAI) methods have emerged to provide human-comprehensible interpretations of AI outcomes. In this paper, we examine the latest advancements in XAI techniques, focusing on their methodologies, algorithms, and the scope of the interpretations they offer. We evaluate these algorithms on the quality of the explanations they generate, their limitations, and their practical applications. By critically assessing their strengths and weaknesses, we aim to show how XAI methods can bridge the gap between high predictive accuracy and interpretability in deep learning models. This comprehensive analysis offers a deeper understanding of state-of-the-art XAI techniques, enabling researchers and practitioners to make informed decisions about integrating explainability into their AI systems.