Breast cancer remains a major global health concern, and early, accurate detection is essential to improve patient outcomes and reduce mortality. Although artificial intelligence (AI), particularly machine learning (ML) and deep learning (DL) models, has shown remarkable promise in breast cancer prediction, the practical adoption of these models is substantially hindered by their lack of interpretability and transparency. Explainable artificial intelligence (XAI) addresses this gap by making complex AI models interpretable, thereby serving as an essential tool for improving trust and transparency. However, the efficacy of XAI in clinical environments remains poorly characterized, and its systematic integration into breast cancer diagnostics is largely overlooked. This paper systematically reviews the impact of ML and DL on breast cancer diagnosis and the XAI approaches currently applied to breast cancer detection, prognosis, and treatment, focusing on their interpretability, clinical applicability, and influence on decision-making. The work presents a thorough assessment of XAI approaches categorized by data type (imaging, genomic, and clinical), a comparative analysis of LIME (Local Interpretable Model-agnostic Explanations), SHAP (SHapley Additive exPlanations), and Grad-CAM (Gradient-weighted Class Activation Mapping), and highlights key challenges and future research directions. The review underscores the potential of XAI to enhance clinical decision-making and patient confidence by bridging the gap between high diagnostic accuracy and interpretability. The findings support multidisciplinary collaboration among medical experts, AI researchers, and policymakers to ensure the responsible and ethical integration of artificial intelligence in society.
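To make the kind of comparison the abstract refers to concrete, the following is a minimal, illustrative sketch (not taken from any study reviewed here) of how SHAP and LIME explanations can be produced for the same tabular breast cancer classifier. It assumes the `shap`, `lime`, and `scikit-learn` packages and uses the public Wisconsin breast cancer dataset bundled with scikit-learn; the model choice and parameters are placeholders.

```python
# Illustrative sketch only: SHAP vs. LIME explanations for one classifier,
# assuming the shap, lime, and scikit-learn packages are installed.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
import shap

# Train a simple classifier on the Wisconsin breast cancer dataset.
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# SHAP: additive per-feature attributions for one test case.
shap_explainer = shap.TreeExplainer(model)
shap_values = shap_explainer.shap_values(X_test[:1])

# LIME: a local surrogate-model explanation for the same test case.
lime_explainer = LimeTabularExplainer(
    X_train, feature_names=data.feature_names,
    class_names=data.target_names, discretize_continuous=True)
lime_exp = lime_explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5)

print("LIME:", lime_exp.as_list())
print("SHAP:", shap_values)  # layout of the attribution array depends on the shap version
```

In a study comparing the two methods, the analyst would typically contrast the top-ranked features each explainer assigns to the same prediction; Grad-CAM, by contrast, applies to convolutional image models rather than tabular data and is therefore not shown here.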