The industry keeps hiring, and the alerts keep crying
Every large financial institution, healthcare system and critical infrastructure operator running a SIEM—Splunk, IBM QRadar, Elastic Security, Microsoft Sentinel, ArcSight—is fighting the same mechanistic failure: operators are drowning in 50,000 to 500,000 daily alerts, tuning rules into silence, missing genuine compromise signals, and burning through cohorts of SOC talent on a three-year cycle. The consensus diagnosis is a staffing shortage. The consensus remedy is training pipelines, offshore SOCs, and SOAR automation to "correlate and reduce noise." None of this works, because alert fatigue is not a hiring problem—it is an architectural defect embedded in SIEM itself, and every popular remediation makes the disease worse.
The Industry Narrative: Staffing as Root Cause
The past three years have produced a substantial and consistent reporting thread. In 2022, after the LastPass compromise, the industry reported that attackers had moved laterally undetected for an estimated four months within a flat, instrumented network. The compromise was eventually identified through log forensics, not real-time alerting. In parallel, major incident after-action reports from Medibank (October 2022), Optus (September 2022), and Latitude (March 2023)—all Australian-regulated entities—revealed that security operations teams had been alerted to anomalous activity but the signal was buried beneath thousands of benign events, tuned-out rules, and thresholds set so high they approached uselessness.
The dominant industry narrative coalesced around "alert fatigue." Gartner, SANS, and major incident response firms published white papers citing studies that SOC analysts receive 5–10 alerts per minute (other estimates put it at 200–300 per analyst per shift), and that true-positive rates in typical SIEM deployments hover between 1% and 5%. The conclusion: hire more people, train them faster, build better analytics teams, implement SOAR playbooks to automate triage, use machine learning to "prioritise" alerts. By 2024, this thesis had become so embedded that CISO investment roadmaps routinely allocated 40–60% of security budgets to "SOC expansion" and machine-learning alert tuning.
This narrative is not false. The staffing crisis is real. But it is a symptom, not a cause.
The Architectural Error: Detection-at-Scale Cannot Work
The SIEM architecture—centralised log aggregation, rules-based correlation, alert generation at scale—is fundamentally incompatible with human-speed decision-making. This is not a personnel failure. It is a design failure.
Consider the mechanical reality. A typical enterprise SIEM ingests telemetry from hundreds of thousands of endpoints, network segments, and cloud services, each source class generating 100–10,000 events per second under normal operation. A Windows domain controller generates 500–1,000 logon events per minute during business hours. A DNS resolver sees 1–10 million queries per day. A web application firewall logs every HTTP request, every SSL handshake failure, every rate-limit breach. A cloud data warehouse (Snowflake, Redshift, Databricks) logs every authentication, every data access, every SQL statement, every configuration change.
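A back-of-envelope estimate makes the scale concrete. The source counts and per-source rates below are illustrative assumptions, not figures from any specific deployment; the point is how quickly modest rates compound:

```python
# Back-of-envelope SIEM ingest estimate. All source counts and event rates
# are illustrative assumptions for a mid-sized enterprise estate.
SECONDS_PER_DAY = 86_400

sources = {
    # source type: (source count, events per second per source)
    "endpoints":          (100_000, 0.5),    # quiet workstations
    "domain_controllers": (20,      15.0),   # ~900 logon events/minute each
    "dns_resolvers":      (4,       60.0),   # ~5M queries/day each
    "waf_nodes":          (10,      500.0),  # every HTTP request logged
}

total_events = sum(n * eps * SECONDS_PER_DAY for n, eps in sources.values())
print(f"~{total_events / 1e9:.1f} billion events/day")          # ~4.8 billion

# A rule set that fires on just 0.001% of events still yields a five-figure
# daily alert queue -- the 50,000-500,000 range cited above.
print(f"~{total_events * 1e-5:,.0f} alerts/day at a 0.001% firing rate")
```

Under these assumed rates, the estate produces roughly 4.8 billion events and nearly 48,000 alerts per day before a single attacker shows up.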
The SIEM must ingest, parse, normalise, and correlate this stream in real time, applying hundreds or thousands of detection rules (written in Sigma, YARA, Splunk SPL, KQL, Elasticsearch query syntax) to identify patterns consistent with MITRE ATT&CK technique IDs—T1566 (phishing), T1547 (persistence mechanisms), T1190 (exploitation of public-facing applications), T1485 (data destruction). When a correlation rule fires—say, a spike in failed logons followed by lateral movement on unusual ports—an alert is generated.
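The correlation logic these rules encode is simple to sketch. Below is the failed-logon-then-lateral-movement pattern in plain Python rather than SPL or KQL; the event schema, threshold, window, and port list are all hypothetical choices for illustration:

```python
from collections import defaultdict
from datetime import timedelta

# Sketch of a correlation rule: a burst of failed logons from one host,
# then a connection on a lateral-movement port inside the same window.
FAILED_LOGON_THRESHOLD = 10
WINDOW = timedelta(minutes=5)
LATERAL_PORTS = {445, 3389, 5985}            # SMB, RDP, WinRM

_recent_failures = defaultdict(list)         # host -> failure timestamps

def on_event(event: dict) -> str | None:
    """Consume one normalised event; return an alert string or None."""
    host, ts = event["host"], event["ts"]
    if event["type"] == "failed_logon":
        # keep only failures inside the correlation window
        _recent_failures[host] = [t for t in _recent_failures[host]
                                  if ts - t <= WINDOW]
        _recent_failures[host].append(ts)
    elif event["type"] == "net_connect" and event["dst_port"] in LATERAL_PORTS:
        if len(_recent_failures[host]) >= FAILED_LOGON_THRESHOLD:
            return (f"ALERT {host}: {len(_recent_failures[host])} failed "
                    f"logons, then connection to port {event['dst_port']}")
    return None
```

Every constant in this fragment is a tuning knob, and each knob trades sensitivity against alert volume.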
But here is the problem: in detection-at-scale, the cost of false positives is felt immediately and continuously, while the cost of false negatives stays invisible until a breach, because detection rules are reactive. The defender writes a rule to catch yesterday's known-bad behaviour. The attacker, by definition, is doing something not yet in the rule set. Worse, legitimate business activity—bulk user onboarding, scheduled backups, system patching, cloud API calls, data exports—generates event patterns identical to attack patterns. So the rule must be tuned to avoid overwhelming the team. The threshold is raised. The sensitivity drops. Attackers slip through.
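The base-rate arithmetic behind those 1–5% true-positive figures is worth making explicit. The inputs below are illustrative assumptions, but the shape of the result holds for any realistic ratio of benign to malicious events:

```python
# Why single-digit alert precision is structural. Illustrative inputs:
events_per_day = 1_000_000_000   # estate-wide event volume
attack_rate    = 1e-6            # 1 in a million events is malicious
sensitivity    = 0.90            # the rule catches 90% of malicious events
fp_rate        = 2e-5            # the rule fires on 0.002% of benign events

true_alerts  = events_per_day * attack_rate * sensitivity          # 900
false_alerts = events_per_day * (1 - attack_rate) * fp_rate        # ~20,000
precision    = true_alerts / (true_alerts + false_alerts)

print(f"{true_alerts:,.0f} true vs {false_alerts:,.0f} false alerts/day")
print(f"precision: {precision:.1%}")    # ~4.3%, inside the quoted 1-5% band

# Lowering fp_rate shrinks the queue, but in practice the same threshold
# change lowers sensitivity too -- the knob that quiets the queue is the
# knob that lets attackers through.
```

Even a rule with a 0.002% false-positive rate, far better than most deployed correlation rules achieve, yields a queue that is more than 95% noise.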
The MOVEit vulnerability exploitation chain (CVE-2023-34362, May 2023) illustrates this precisely. Attackers exploited an unauthenticated SQL injection flaw in Progress MOVEit Transfer, a managed file transfer product used by thousands of enterprises for secure file exchange. The exploitation was detectable: unusual file operations by the MOVEit service account, web server processes spawning command-line shells, outbound connections to attacker infrastructure. But in organisations with active SIEMs, these events were either logged as low-priority application alerts (file transfer jobs are expected to read and write files) or were never correlated into a single alert because the rules were written to detect named threat groups (using campaign-specific infrastructure IOCs), not exploitation patterns. The compromise cascade—affecting US health systems, Latin American banks, European public broadcasters, and Asian manufacturers—lasted 2–4 weeks per victim before discovery.
The Snowflake tenant compromise chain (October 2023 onwards, publicly disclosed in mid-2024) is more instructive still. Attackers acquired valid credentials for Snowflake customers through credential stuffing and malware-infected developer machines. Once authenticated, they read data—billions of records from healthcare systems, financial services, retail, and SaaS platforms. The compromises were not detected by Snowflake's own alerting or by customers' SIEMs. Why? Because reading data is what Snowflake does. There is no detection rule that says "if analyst account X reads Y bytes of data, alert." Any threshold low enough to catch theft would flag routine analytics and drown the team in noise. The defender cannot out-alert an attacker who is doing legitimate-looking things with stolen credentials.
Only later, when affected parties began to receive ransom demands and extortion letters, and when threat intelligence researchers (drawing on sources such as GreyNoise and Shodan) noticed unusual data export patterns, did the compromises become visible. By then, months of data exfiltration had occurred.
Why Standard Remediation Deepens the Problem
The industry response has been to add more layers of detection. Implement SOAR (Security Orchestration, Automation and Response) to correlate alerts faster. Add machine learning to rank alerts by "risk score." Hire alert-tuning engineers to write custom rules. Deploy EDR (Endpoint Detection and Response) agents to every workstation, capturing process execution, network connections, file modifications. Build security data lakes to store logs for forensic correlation. Hire threat intelligence teams to feed IOCs and threat-actor patterns into the SIEM.
Each intervention adds complexity and, paradoxically, more noise. EDR solutions like Microsoft Defender for Endpoint, CrowdStrike Falcon, and SentinelOne generate their own alert streams—often at higher fidelity than SIEM rules—but are not integrated into the central alert workflow, creating alert fragmentation. SOAR systems must be configured with hundreds of playbooks, each requiring tuning to avoid runaway automation. Machine learning models, trained on historical alert data that includes thousands of false positives, learn to replicate the biases of the human analysts who previously tuned the rules—they simply automate the mistakes faster.
The NHS Synnovis attack (June 2024), attributed to the Qilin ransomware-as-a-service operation, is a case study in this failure mode. NHS trusts had invested heavily in EDR and SIEM expansion following the 2017 WannaCry incident. Yet the Synnovis compromise—involving reconnaissance, lateral movement, and encryption across hundreds of machines—proceeded undetected for weeks. The attacker's techniques (T1087 account discovery, T1570 lateral tool transfer, T1486 data encrypted for impact) were detectable. But the relevant signals were drowned in the alert stream generated by EDR agents reporting routine system activity, Windows Defender signature updates, and misconfigured network sensors reporting transient connectivity blips. The SOC was in alert-fatigue paralysis. By the time human analysts reviewed the relevant EDR timeline forensically, 80% of critical systems were encrypted.
The Scattered Spider attack on M&S (April 2025), still under investigation but with publicly disclosed operational details, demonstrates the same architectural failure. The threat group used social engineering and legitimate cloud provisioning APIs to establish persistence. The attacks were not detected in real time by either EDR or SIEM because, again, legitimate cloud engineers use cloud provisioning APIs. The only effective detection was post-compromise forensics—reviewing historical logs after the breach was discovered through other means (likely exfiltration detection outside the enterprise network).
The pattern is clear: in every major incident of the past three years, the SIEM and EDR alerts either did not fire, or fired so frequently that human analysts could not prioritise the signal. The standard remedy—more alerts, better ML, faster SOAR—is mechanically incompatible with the constraint that humans must make the decisions.
The PULSE Reading: Architecture, Not Detection
The PULSE doctrine proceeds from a different premise: you cannot catch every breach in real time, so build systems that are hard to breach and that degrade gracefully when breached. This is not EDR. It is not SIEM. It is not detection-and-response. It is post-breach resistance through architecture.
The principle of zero-knowledge substrate is central. If data is encrypted client-side with keys held only by the data owner, stored in isolated compartments with no cross-account access, and accessed only through cryptographically verified channels, then the attacker who gains a single credential or compromises a single machine learns nothing. They cannot read data. They cannot move laterally. They cannot exfiltrate. They are visible only when they attempt to break the isolation—and that breakage can be detected through substrate-level audit trails, not through rule tuning.
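The property is easy to demonstrate in miniature. The sketch below uses the `cryptography` package's Fernet recipe for client-side symmetric encryption; it illustrates the zero-knowledge pattern only, and is not a description of PULSE's actual substrate:

```python
# pip install cryptography
from cryptography.fernet import Fernet

class ClientSideVault:
    """Data owner encrypts before upload; the key never leaves the owner."""
    def __init__(self):
        self._fernet = Fernet(Fernet.generate_key())   # owner-held key

    def seal(self, plaintext: bytes) -> bytes:
        return self._fernet.encrypt(plaintext)

    def open(self, ciphertext: bytes) -> bytes:
        return self._fernet.decrypt(ciphertext)

remote_store: dict[str, bytes] = {}     # stands in for the storage service

owner = ClientSideVault()
remote_store["compartment-7/records"] = owner.seal(b"patient records ...")

# An attacker with full read access to the store -- a stolen service
# credential, a compromised storage node -- recovers only ciphertext:
stolen = remote_store["compartment-7/records"]
print(stolen[:24], "...")               # opaque bytes, useless without the key

# Only the decryption boundary is meaningful to audit:
assert owner.open(stolen) == b"patient records ..."
```

The store operator, and any attacker who compromises it, holds ciphertext and nothing else; the only events worth recording are attempts to cross the decryption boundary.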
Separate control-plane from data-plane. The control-plane (provisioning, configuration, policy) is a small, hardened, immutable attack surface. The data-plane (data storage, processing, transfer) is designed so that every transaction is cryptographically auditable and every access attempt is logged at the encryption/decryption boundary, not at the application layer where logs can be manipulated. An attacker who breaches the data-plane cannot modify the control-plane audit trail. An attacker who steals a credential cannot use it to access compartments they are not authorised for because the cryptographic binding is enforced at the hardware/firmware layer, not at the software SIEM layer.
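One common construction for such an audit trail is a hash chain, in which every record commits to the digest of its predecessor, so any retroactive edit breaks verification. The sketch below is a minimal software illustration; a real substrate would anchor the chain in hardware or firmware as described above:

```python
import hashlib
import json
import time

class AuditChain:
    """Append-only log; each record commits to the previous record's digest."""
    GENESIS = "00" * 32

    def __init__(self):
        self._records: list[dict] = []
        self._head = self.GENESIS

    def append(self, event: dict) -> None:
        body = json.dumps({"ts": time.time(), "prev": self._head, **event},
                          sort_keys=True).encode()
        self._head = hashlib.sha256(body).hexdigest()
        self._records.append({"body": body, "digest": self._head})

    def verify(self) -> bool:
        prev = self.GENESIS
        for rec in self._records:
            if hashlib.sha256(rec["body"]).hexdigest() != rec["digest"]:
                return False            # record body was altered
            if json.loads(rec["body"])["prev"] != prev:
                return False            # chain linkage broken
            prev = rec["digest"]
        return True

log = AuditChain()
log.append({"actor": "svc-etl", "action": "decrypt", "compartment": "records-7"})
log.append({"actor": "svc-etl", "action": "read", "bytes": 4096})
assert log.verify()

# A data-plane attacker who rewrites history invalidates the whole chain:
log._records[0]["body"] = log._records[0]["body"].replace(b"decrypt", b"deleted")
assert not log.verify()
```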
Adaptive active defence means that the system's threat posture—encryption key rotation, anomaly thresholds, network routing rules, access control policies—is continuously adjusted based on the threat environment. Not in response to a SIEM alert (which arrives too late), but continuously and autonomously, following predetermined rules. When a data-plane access pattern deviates beyond policy-defined bounds (e.g., a credential is used from an unexpected geographic location, or to access a compartment that shares no common business logic with previous accesses), the system does not generate an alert for a human to tune. It revokes the credential, rotates the relevant keys, and seals the compartment. The attacker is locked out. The forensic record is immutable.
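In code, the distinguishing feature is that the failure path of the authorisation check is containment, not an alert. The sketch below is deliberately simplified; the field names, the deviation test, and the stubbed key rotation are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Credential:
    id: str
    home_regions: set[str]      # where this credential is normally used
    compartments: set[str]      # compartments it is cryptographically bound to

revoked: set[str] = set()
sealed: set[str] = set()

def rotate_keys(compartment: str) -> None:
    """Stub: a real substrate would re-key the compartment here."""

def authorise(cred: Credential, region: str, compartment: str) -> bool:
    """Deviation triggers containment in the same code path as the check."""
    if cred.id in revoked or compartment in sealed:
        return False
    if region not in cred.home_regions or compartment not in cred.compartments:
        revoked.add(cred.id)         # credential dies immediately
        sealed.add(compartment)      # compartment closes pending re-key
        rotate_keys(compartment)     # stolen key material becomes worthless
        return False
    return True

analyst = Credential("cred-42", {"eu-west"}, {"records-7"})
assert authorise(analyst, "eu-west", "records-7")        # normal use
assert not authorise(analyst, "ap-south", "records-7")   # deviation: contained
assert not authorise(analyst, "eu-west", "records-7")    # still locked out
```

There is no queue between detection and consequence, so there is nothing for a fatigued human to fall behind on.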
Domain-specific automation means encryption and access control built into the database substrate, not into a SIEM rule; anomaly detection built into the network gateway firmware, not into an IDS signature; lateral-movement prevention built into the cloud IAM policy engine, not into a SOAR playbook. When the substrate is the security mechanism, there is no human-readable alert to be fatigued by, because the attack is prevented before the event reaches the SIEM.
The Operational Implication: From Response to Resistance
Organisations that have adopted post-breach resistance architecture report a qualitative shift in security operations. Raw event volume does not decrease; alerts are what nearly disappear from the operational workflow. Instead of a SOC team managing 50,000 daily alerts, they manage 100–200 high-confidence, low-noise incidents per year, drawn from cryptographic audit trails and substrate-level anomalies. These incidents are often detected post-compromise (because some intrusions will succeed), but the attacker's ability to move laterally, escalate, or exfiltrate is architecturally blocked. The SOC's job becomes forensic investigation and incident containment, not real-time alert triage.
This is not theoretical. Organisations handling regulated data—financial transactions, healthcare records, critical infrastructure control systems—have been moving toward this model for 18–24 months. The regulatory environment is accelerating the shift. The FCA's SM&CR (Senior Managers and Certification Regime) now places explicit liability on firm leadership for cyber-resilience architecture, not detection-and-response maturity. APRA CPS 234 (information security) requires Australian financial institutions to demonstrate that critical data and systems are architecturally isolated and cryptographically protected. DORA (Digital Operational Resilience Act), effective January 2025, mandates that EU financial entities test and verify their post-breach resilience, not their detection speed. NIS2 imposes similar requirements across critical infrastructure sectors.
The old playbook—SIEM, EDR, SOAR, ML tuning, alert reduction—cannot meet these regulatory demands because it is fundamentally reactive. The new playbook—zero-knowledge substrate, cryptographic audit, adaptive posture, domain-specific isolation—is inherently post-breach resistant and regulatory-aligned.
The Invitation
Senior operators responsible for data sovereignty, resilience architecture, or regulated data protection who believe their organisation's security operations are constrained by detection-at-scale should request a structured briefing under mutual NDA.
PULSE engages only with verified counterparties. Strategic briefing material—reference architecture, regulatory mapping, deployment topology—is released after counter-execution of a mutual NDA scoped to the recipient's evaluation purpose.