From Black-Box Alerts to Actionable Intelligence: Explainable Artificial Intelligence in Cloud Anomaly Detection
Keywords:
Explainable AI; Cloud Anomaly Detection; Actionable Intelligence; Cybersecurity Decision Support; Trustworthy AI; Security Operations Center (SOC) Automation; Human-AI Teaming; Deep Learning Interpretability

Abstract
The ubiquitous adoption of cloud computing, characterized by ephemeral microservices, hybrid architectures, and elastic scaling, has fundamentally altered the cybersecurity landscape. To defend these vast, distributed infrastructures, organizations have been forced to deploy increasingly complex Deep Learning (DL) models capable of analyzing high-velocity telemetry. While these advanced neural architectures, ranging from Long Short-Term Memory (LSTM) networks to Transformer-based models, excel at identifying subtle, non-linear deviations in high-dimensional data, the resulting detection systems often operate as opaque "black boxes." They generate high volumes of statistical alerts based on complex feature interactions but fail to provide the semantic context necessary for human understanding. This disconnect between the mathematical output of a detection model and the cognitive comprehension required of security operators leads to severe operational dysfunctions, including alert fatigue, inconsistent triage decisions, and prolonged incident response times.




















