Cyberbullying has become a serious societal issue that affects millions of people globally, particularly young people. Although existing machine learning and artificial intelligence (AI) methods for detecting and preventing cyberbullying have shown promise, their black-box nature often limits their interpretability, reliability, and adoption by stakeholders. This study presents a comprehensive overview of Explainable Artificial Intelligence (XAI), which aims to close the gap between the predictive power of AI and its interpretability, for preventing cyberbullying. The first part of the study examines the current prevalence of cyberbullying, its effects, and the limitations of existing AI-based detection techniques. We then introduce XAI, outlining its significance and surveying several XAI frameworks and methodologies. The main emphasis is on the application of XAI to cyberbullying detection and prevention, including explainable deep learning models, interpretable feature engineering, and hybrid methods. We examine the ethical and privacy issues surrounding XAI in this setting and present several case studies. Additionally, we compare XAI-driven models with conventional AI techniques and give an overview of evaluation metrics and datasets relevant to cyberbullying detection. The article also examines real-time alert systems, justifiable moderator recommendations, and user-specific feedback as XAI-driven cyberbullying intervention tools. We conclude by outlining possible future directions, including technological advances, integration with other emerging technologies, and addressing challenges of bias and fairness.
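As an illustrative sketch of the kind of post-hoc XAI technique surveyed in this article, the example below uses LIME to attribute a text classifier's prediction to individual tokens. The tiny inline dataset, the label names, and the TF-IDF plus logistic-regression model are assumptions chosen for illustration only; they are not the specific systems reviewed in the paper.

```python
# Minimal sketch: token-level explanation of a cyberbullying text classifier.
# Assumes the "lime" and "scikit-learn" packages; the toy training examples
# and class names below are hypothetical placeholders, not a real dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

# Toy training data (placeholder examples for illustration).
texts = [
    "you are so stupid and ugly",
    "nobody likes you, just quit",
    "great game last night, well played",
    "thanks for helping me with my homework",
]
labels = [1, 1, 0, 0]  # 1 = bullying, 0 = benign

# Black-box classifier: TF-IDF features fed into logistic regression.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# LIME perturbs the input text and fits a local surrogate model,
# producing per-token weights that explain this single prediction.
explainer = LimeTextExplainer(class_names=["benign", "bullying"])
explanation = explainer.explain_instance(
    "you are stupid, nobody likes you",
    model.predict_proba,
    num_features=4,
)
for token, weight in explanation.as_list():
    # A positive weight pushes the prediction toward "bullying".
    print(f"{token:>10s}  {weight:+.3f}")
```

Output of this kind (tokens ranked by their contribution to the prediction) is one way such systems can surface the evidence behind a flag to moderators or users, rather than returning an unexplained score.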