Unlocking Deeper Understanding: Leveraging Explainable AI for API Anomaly Detection Insights

Cited by: 0
Authors
Jones, Mike [1 ]
Bayesh, Masrufa [2 ]
Jahan, Sharmin [2 ]
Affiliations
[1] Berryhill High Sch, Tulsa, OK 74107 USA
[2] Oklahoma State Univ, Stillwater, OK 74078 USA
Source
2024 16TH INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND COMPUTING, ICMLC 2024 | 2024
Keywords
API Security; Anomaly Detection; Machine Learning; Explainable AI
DOI
10.1145/3651671.3651738
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
In the modern era of digitalization and interconnected systems, Application Programming Interfaces (APIs) have become a crucial means of data exchange and communication between software applications. Alongside these benefits, however, their exposure to the internet introduces new security threats and attack vectors, leaving APIs susceptible to data breaches, service disruptions, and financial losses. API security therefore becomes a primary concern, requiring measures and practices that protect APIs from unauthorized access and other threats. Continuous monitoring of API behaviors is necessary to detect anomalies, but it is challenging because APIs generate massive volumes of logs, metrics, and traces, and because API behaviors evolve over time. Machine learning (ML) offers a promising solution to this challenge, owing to its ability to process vast amounts of data and adapt to dynamic environments. We investigate API access behavior patterns and employ a random forest (RF) model to predict anomalous API behaviors. However, to keep applications from being compromised, it is crucial to understand the underlying causes of anomalies and to enable protection measures accordingly. Explainable AI (XAI) is a prominent solution that explains why the model considers a particular API usage pattern an anomaly. Yet gaining insight from such explanations requires expertise, and not every stakeholder or service in the application may have access to the deployed ML and XAI used for monitoring. To address this issue, we have developed a component that extracts information from XAI outcomes and generates a structured report reflecting them. This makes the API behavior monitoring process transparent, interpretable, and trustworthy, and ensures that all stakeholders have access to the information needed to make informed decisions.
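The workflow the abstract describes (an RF model flagging anomalous API access patterns, an XAI layer explaining each decision, and a component that turns the explanation into a stakeholder-facing report) can be sketched roughly as below. This is a minimal illustration under stated assumptions, not the paper's implementation: the feature names, the toy data, and the choice of SHAP as the XAI technique are all hypothetical.

```python
# Hedged sketch: RF anomaly detection on API access features, SHAP
# explanations, and a structured report summarizing the XAI outcome.
# Feature names and the labeling rule are hypothetical placeholders.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

FEATURES = ["requests_per_min", "error_rate", "payload_bytes", "distinct_endpoints"]

# Toy per-window API access features; label 1 = anomalous, 0 = normal.
rng = np.random.default_rng(0)
X = rng.random((500, len(FEATURES)))
y = (X[:, 0] + X[:, 1] > 1.4).astype(int)  # stand-in labeling rule

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)

def explain_to_report(x: np.ndarray) -> dict:
    """Turn one prediction plus its SHAP values into a structured report."""
    row = x.reshape(1, -1)
    pred = int(model.predict(row)[0])
    sv = explainer.shap_values(row)
    # SHAP returns a per-class list in older versions and a 3-D array in
    # newer ones; extract the contributions toward the anomaly class (1).
    contrib = sv[1][0] if isinstance(sv, list) else sv[0, :, 1]
    ranked = sorted(zip(FEATURES, contrib), key=lambda p: abs(p[1]), reverse=True)
    return {
        "verdict": "anomaly" if pred else "normal",
        "top_factors": [{"feature": f, "shap_value": round(float(v), 4)}
                        for f, v in ranked[:3]],
    }

print(explain_to_report(X[0]))  # e.g. {'verdict': ..., 'top_factors': [...]}
```

The report dictionary stands in for the paper's structured report: it exposes the verdict and the highest-impact features so a stakeholder can act on an alert without access to the deployed model or expertise in reading raw SHAP output.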
Pages: 211-217
Page count: 7