From Black-Box Alerts to Actionable Intelligence: Explainable Artificial Intelligence in Cloud Anomaly Detection
DOI: https://doi.org/10.66320/w22s7m18

Keywords:
Explainable AI; Cloud Anomaly Detection; Actionable Intelligence; Cybersecurity Decision Support; Trustworthy AI; Security Operations Center (SOC) Automation; Human-AI Teaming; Deep Learning Interpretability.

Abstract
The ubiquitous adoption of cloud computing, characterized by ephemeral microservices, hybrid architectures, and elastic scaling, has fundamentally altered the cybersecurity landscape. To defend these vast and distributed infrastructures, organizations have been forced to deploy increasingly complex Deep Learning (DL) models capable of analyzing high-velocity telemetry. While these advanced neural architectures, ranging from Long Short-Term Memory (LSTM) networks to Transformer-based models, excel at identifying subtle, non-linear deviations in high-dimensional data, the resulting detection systems often operate as opaque "black boxes." They generate high volumes of statistical alerts based on complex feature interactions but fail to provide the semantic context necessary for human understanding. This disconnect between the mathematical output of a detection model and the cognitive comprehension required by security operators leads to severe operational dysfunction, including "alert fatigue," inconsistent triage decisions, and suboptimal incident response times.
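To make the gap described above concrete, the sketch below contrasts an opaque anomaly score with an attributed alert. It is a minimal illustration rather than the paper's method: it assumes a reconstruction-based detector (such as an LSTM autoencoder) whose anomaly score is the mean squared reconstruction error, and the telemetry feature names and alerting threshold are hypothetical.

```python
"""Minimal sketch: decomposing a black-box anomaly score into per-feature
attributions so an operator sees *why* an alert fired, not just that it did.

Assumes a reconstruction-based detector (e.g., an LSTM autoencoder); the
feature names and threshold are hypothetical, for illustration only.
"""
import numpy as np

FEATURES = ["cpu_util", "net_egress_mb", "api_error_rate", "login_attempts"]
THRESHOLD = 0.5  # hypothetical alerting threshold on the anomaly score


def explain_alert(x: np.ndarray, x_hat: np.ndarray) -> dict:
    """Turn one reconstruction into an attributed, human-readable alert.

    x     : observed telemetry vector for one time window
    x_hat : the detector's reconstruction of that window
    """
    per_feature_error = (x - x_hat) ** 2               # squared error per feature
    score = float(per_feature_error.mean())            # the opaque "black-box" score
    share = per_feature_error / per_feature_error.sum()  # attribution weights
    ranked = sorted(zip(FEATURES, share), key=lambda p: -p[1])
    return {
        "score": round(score, 3),
        "alert": score > THRESHOLD,
        "top_contributors": [(name, round(float(w), 3)) for name, w in ranked[:2]],
    }


# Toy example: egress volume and login attempts deviate sharply from the
# reconstruction, so they dominate the attribution and hand the analyst
# semantic context (possible exfiltration or brute force) for triage.
observed = np.array([0.42, 3.10, 0.05, 2.40])
reconstructed = np.array([0.40, 0.90, 0.04, 0.30])
print(explain_alert(observed, reconstructed))
```

Decomposing the score this way is what lets an analyst move from "the model fired" to "egress volume and login attempts drove this alert," the shift from black-box output to actionable intelligence that motivates the paper.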
License
This is an open access journal, which means that all content is freely available without charge to the user or his/her institution. Users are allowed to read, download, copy, distribute, print, search, or link to the full texts of the articles, or use them for any other lawful purpose, without asking prior permission from the publisher or the author. This is in accordance with the BOAI definition of open access. Articles are licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0).
