In the modern era of digitalization and interconnected systems, Application Programming Interfaces (APIs) have become a crucial medium for data exchange and communication between software applications. Alongside these benefits, however, APIs introduce new security threats and attack vectors: their exposure to the internet makes the applications behind them susceptible to unauthorized access, data breaches, service disruptions, and financial losses. API security is therefore a primary concern, requiring measures and practices that protect APIs from these threats.

Detecting anomalies requires continuous monitoring of API behavior, which is challenging because of the massive volumes of data that APIs generate in the form of logs, metrics, and traces, and because API behavior evolves over time. Machine learning (ML) offers a promising solution to this challenge owing to its ability to process vast amounts of data and adapt to dynamic environments. We investigate API access behavior patterns to predict anomalous API behaviors, employing a random forest (RF) model.

To protect applications from being compromised, however, it is not enough to flag anomalies; it is crucial to understand their underlying causes and enable protection measures accordingly. Explainable AI (XAI) is a prominent solution that explains the model's decision, i.e., why a particular API usage pattern is considered an anomaly. Yet gaining insight from such explanations requires expertise, and not every stakeholder or service in the application may have access to the deployed ML and XAI components used for monitoring. To address this issue, we have developed a component that extracts information from the XAI outcomes and generates a structured report reflecting them. This keeps the process of monitoring API behaviors transparent, interpretable, and trustworthy, and ensures that all stakeholders have access to the information they need to make informed decisions.
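
As a minimal illustration of such a pipeline, the sketch below trains an RF classifier on hypothetical API access features, explains a single prediction, and converts that explanation into a structured report. SHAP is used here as one representative XAI technique, and the feature names, synthetic data, and report schema are illustrative assumptions rather than a prescribed implementation.

```python
# Minimal sketch: RF anomaly classification over API access features,
# a SHAP explanation of one prediction, and a structured JSON report.
# Feature names, labels, and the report schema are illustrative assumptions.
import json
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-request features derived from API logs, metrics, and traces.
feature_names = ["requests_per_min", "error_rate", "payload_bytes", "distinct_endpoints"]

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, len(feature_names)))
y_train = (X_train[:, 1] > 1.0).astype(int)  # toy labels: high error rate => anomaly

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Explain one observation with SHAP's tree explainer.
x = rng.normal(size=(1, len(feature_names)))
explainer = shap.TreeExplainer(rf)
sv = explainer.shap_values(x)
# Depending on the shap version, shap_values returns a list (one array per class)
# or a 3-D array; select contributions toward the "anomaly" class in either case.
contrib = sv[1][0] if isinstance(sv, list) else sv[0, :, 1]

# Turn the raw explanation into a structured report that non-experts can consume.
report = {
    "prediction": "anomaly" if rf.predict(x)[0] == 1 else "normal",
    "anomaly_probability": float(rf.predict_proba(x)[0, 1]),
    "top_contributing_features": sorted(
        (
            {"feature": name, "value": float(v), "shap_contribution": float(s)}
            for name, v, s in zip(feature_names, x[0], contrib)
        ),
        key=lambda d: abs(d["shap_contribution"]),
        reverse=True,
    )[:3],
}
print(json.dumps(report, indent=2))
```

The report lists the features that contributed most to the decision, so that stakeholders without access to the deployed ML and XAI components can still act on the monitoring outcome.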