The Metric That Measures Your Defeat

Mean Time To Response (MTTR) — the time from detection to containment — has become the performance scoreboard of modern security operations, yet it is a metric that systematically rewards the wrong behaviours and blinds organisations to the failures that matter most.

The appeal is obvious. MTTR is measurable. It sits naturally within the incident response playbook. It fits neatly into Service Level Agreements (SLAs), board dashboards, and regulatory compliance narratives. Security Operations Centre (SOC) teams optimise around it. Vendors sell products that promise to reduce it. Insurance premiums, in some markets, are now indexed against it. But in doing so, the industry has built an apparatus that measures speed of response to a threat that should never have reached the detection layer in the first place — and worse, creates perverse incentives to accelerate the detection machinery whilst leaving the substrate of attack deeply unexamined.

This is not a philosophical complaint. It is a structural failure mode with measurable consequences visible across dozens of real-world breaches, each of which would have been either prevented or fundamentally altered had the organisation's security architecture optimised for what matters instead.

The Industry Narrative: Speed as Virtue

The MTTR dogma crystallised around a simple narrative: the faster you detect and respond to an intrusion, the less damage the attacker can do. This logic appears in the Detect and Respond functions of the NIST Cybersecurity Framework (NIST CSF); it underpins the SEC's four-business-day disclosure rule for material security incidents; it anchors the NYDFS Cybersecurity Requirements for Financial Services Companies (Part 500), which mandates notification within 72 hours of discovery; and it is embedded in the EU's NIS2 Directive, which requires incident notification "without undue delay".

The metrics machinery that grew from this principle is formidable. Detection platforms — Splunk, Datadog, CrowdStrike, Microsoft Sentinel, Elastic — built their market propositions around reducing the time between the first suspicious log entry and a triaged alert. Security Orchestration, Automation and Response (SOAR) platforms — Palo Alto Networks Cortex XSOAR, Splunk SOAR (formerly Phantom) — were engineered to compress MTTR by automating containment playbooks. Security Information and Event Management (SIEM) vendors published benchmarks: median MTTR of 5–7 days in financial services; 3–4 weeks in healthcare. The messaging was aspirational: get faster, get better.

Real incidents seemed to validate this. The SolarWinds supply-chain compromise of 2020 exposed the cost of slow detection. Nation-state actors rode trojanised Orion software updates into US government and Fortune 500 environments, moving laterally for months before discovery. The forensic timeline showed that detection was catastrophically delayed — in some cases 10+ months post-compromise. The lesson the industry drew: we need better sensors, faster logging, tighter correlation rules. MTTR became a post-mortem staple: if only we had detected it in hours, not months.

But the framing was already false. The SolarWinds intrusion was not primarily a detection problem. It was a supply-chain assurance problem, a code-review governance problem, a lateral movement control problem. By the time any SIEM rule could have fired, the attacker's persistence was already hardened. Speeding response from month-10 to day-3 would not have prevented the compromise; it would have merely shortened the window within which the attacker had already achieved their objectives. The focus on MTTR distracted from the architectural question: how did untrusted code from a third-party vendor reach the control plane of critical infrastructure?

More recent incidents amplify this structural critique. The Snowflake customer-data cascade of mid-2024 exposed the limits of detection speed when the breach vector is credential compromise. Attackers accessed Snowflake customer accounts using stolen credentials (likely harvested by info-stealer malware from compromised development machines). Once inside, they exfiltrated data wholesale — Ticketmaster, Advance Auto Parts, and LendingTree among the victims. MTTR was not the binding constraint. Detection occurred only after data egress was complete. No amount of SOC acceleration could have prevented the exfiltration if the attacker held legitimate credentials and the data plane permitted bulk export. The failure was architectural: the absence of zero-knowledge encryption in the application substrate, the absence of per-row or per-field encryption that would have made exfiltrated data worthless without decryption keys stored elsewhere.

The Change Healthcare ransomware attack of 2024 — attributed to the ALPHV/BlackCat group — illustrates the same dynamic. The attackers gained entry using compromised credentials on a remote-access portal that lacked multi-factor authentication; roughly nine days elapsed between initial access and the deployment of ransomware across systems, with lateral movement proceeding undetected throughout. But the operational impact — disruption to US healthcare billing, pharmacy claims processing, labour reallocation across hospitals — could not have been mitigated by faster response alone. The binding constraint was the absence of segmentation between the internet-facing access point and critical business processes. Faster detection does not eliminate the cost of a system rebuild once ransomware has touched the domain controller and file shares. The architecture permitted the attack to propagate faster than any human response team could contain it.

The Perverse Incentive Structure

MTTR optimisation has generated a set of unintended consequences that undermine the security apparatus itself.

First: alert fatigue masquerading as sensitivity. To reduce MTTR, organisations accumulate detection rules — YARA signatures for known malware, Sigma rules derived from MITRE ATT&CK techniques, static file-hash blacklists, network anomaly thresholds. The volume of alerts explodes. Splunk deployments in large financial institutions can generate upwards of 10,000 alerts per hour; the average alert is reviewed in under 30 seconds. The mean time to dismiss an alert (false positive confirmation) is now a metric in its own right. The apparent speed of detection masks the reality: the organisation is looking at everything except the signals that matter. Detection rules are tuned for noise, not for the adversary.

Second: investment misallocation. Budget flows toward detection infrastructure — log aggregation, analytics platforms, threat-intelligence subscriptions — at the expense of prevention and crypto-substrate investments. If MTTR is the score, then sensors are the game. The Gartner Magic Quadrant for Security Information and Event Management (SIEM) has become the purchasing ritual for mid-to-large enterprises, regardless of whether their threat model actually benefits from another Splunk cluster. Meanwhile, zero-knowledge infrastructure, application-layer encryption, cryptographic separation of duties — the investments that would actually prevent breaches — languish in the "nice-to-have" category because they do not immediately improve MTTR metrics.

Third: the encoding of detection-response into the threat model itself. When MTTR is the metric, the organisation implicitly assumes breach is inevitable. The security posture becomes: we will not prevent intrusion; we will detect it faster than the attacker exfiltrates data. This is a losing game against a patient adversary. The Synnovis ransomware attack on NHS pathology services (June 2024) was contained within 48 hours according to incident response metrics — a credible MTTR for a healthcare system. Yet it caused weeks of surgical delays across London hospitals. The attacker had already achieved the business objective: disruption of service. No detection speed corrects that calculus.

Fourth: the optimisation of intermediate steps rather than outcomes. MTTR favours observability of threats over elimination of threat surface. An organisation might reduce MTTR to 4 hours and still face the consequence of every detection: incident response overhead, system downtime, forensic investigation, evidence preservation, legal notification. Each "fast response" is still a response — a cost centre, not a prevention. The metric creates no incentive to eliminate the condition that triggered the response in the first place.

The Structural Failure This Exposes

These perverse incentives expose a deeper architectural failure: the detection-response model assumes that the control plane and data plane operate in the same security domain. If they do, then speed of detection is indeed critical — every second of attacker access means more lateral movement, another file transferred, another credential harvested. The race to the bottom ensues: reduce MTTR or lose.

But this assumption is a choice, not a law of nature. It reflects a decision to build systems in which detection and response are necessary because prevention has been treated as impossible. It reifies the idea that compromise is inevitable and that the organisation's only lever is speed.

The PULSE doctrine rejects this framing. It proposes that the right metric is not speed of response to breach but absence of breach consequence — measured not by MTTR but by a fundamentally different set of architectural primitives.

What to Measure Instead: Post-Breach Resistance

The first principle is this: the organisation should be constructed such that a compromise of the authentication and detection infrastructure (the control plane) does not permit the attacker to access or alter data (the data plane).

Zero-knowledge substrate: encryption keys are not held by the systems that hold the data. If an attacker gains administrative access to a database, the data itself remains opaque to them. This is not new technology — it is a choice to use it. End-to-end encryption in financial messaging, for instance, has been architecturally possible for decades. The reason it is not universal is that it increases operational complexity and shifts the key-management burden to the organisation. But this is precisely the point: making breach irrelevant requires accepting the operational burden of key custody. The metric becomes: percentage of data that remains opaque to an attacker who controls the system it resides on. For a financial services organisation, this should approach 100%. For NHS patient records, the same. The Snowflake cascade would not have occurred if customer data had been encrypted at the field level with keys held in a separate cryptographic substrate.
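The separation the paragraph describes can be sketched in a few lines. This is a minimal, hypothetical illustration of the architecture — the `KeyService` class stands in for a real KMS or HSM, and the HMAC-based keystream is a toy cipher used only so the sketch runs with the standard library; a production system would use an AEAD such as AES-GCM from a vetted cryptography library.

```python
import hashlib
import hmac
import secrets

class KeyService:
    """Stands in for an external KMS/HSM: a separate trust domain that
    holds per-field keys and never hands them to the data store."""
    def __init__(self):
        self._keys = {}

    def key_for(self, field_id: str) -> bytes:
        if field_id not in self._keys:
            self._keys[field_id] = secrets.token_bytes(32)
        return self._keys[field_id]

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy HMAC-counter keystream, for illustration only.
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(4, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt_field(kms: KeyService, field_id: str, plaintext: bytes):
    nonce = secrets.token_bytes(16)
    ks = _keystream(kms.key_for(field_id), nonce, len(plaintext))
    return nonce, bytes(a ^ b for a, b in zip(plaintext, ks))

def decrypt_field(kms: KeyService, field_id: str, nonce: bytes, ct: bytes):
    ks = _keystream(kms.key_for(field_id), nonce, len(ct))
    return bytes(a ^ b for a, b in zip(ct, ks))

# The data store persists only (nonce, ciphertext). An attacker with
# full read access to the store — the Snowflake scenario — obtains
# nothing usable without also compromising the key service.
kms = KeyService()
nonce, ct = encrypt_field(kms, "customer.email", b"alice@example.com")
assert ct != b"alice@example.com"
assert decrypt_field(kms, "customer.email", nonce, ct) == b"alice@example.com"
```

The design point is the trust boundary, not the cipher: the data store never sees a key, so the "percentage of data opaque to an attacker who controls the system it resides on" metric is decided by how much data flows through `encrypt_field` before it is persisted.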

Continuous adversarial posture adjustment: rather than measuring speed of response to a detected threat, measure the rate and breadth of architectural change in response to observed attacker behaviour. This is not "proactive threat hunting" — it is continuous modification of the attack surface itself. When a detection indicates an attack vector, the response is not merely to contain the incident but to render that vector permanently unavailable. If attackers are exploiting lateral movement via credential delegation in a Microsoft Active Directory environment, the architecture is redesigned to eliminate credential delegation as a primitive. This might mean moving workloads to a zero-trust architecture in which every transaction is re-authenticated and re-authorised, even across internal networks. The metric: how many attacker techniques have been rendered permanently unavailable via architectural change in the past 12 months?

Domain-specific automation in the substrate, not in the SIEM. Rather than writing YARA rules and Sigma signatures in a general-purpose detection platform, embed the security logic into the application and infrastructure substrate itself. A financial transaction system, for instance, should not rely on a SIEM to detect unauthorised fund transfers. The transaction engine itself should enforce the invariant: a fund transfer requires cryptographic proof from two independent key-holders before commit. This invariant does not depend on detection at all; it is enforced by the system's core design. The metric: the number of security-critical operations protected by cryptographic enforcement rather than by policy and detection.
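The dual key-holder invariant can be made concrete with a short sketch. This is a hypothetical illustration, not a reference implementation: the `TransferEngine` class and its HMAC-based proofs are placeholders for what would, in practice, be signatures from two independently operated HSMs or signing services.

```python
import hashlib
import hmac
import secrets

class TransferEngine:
    """A transfer commits only with valid authorisations from two
    independently held keys — enforced in the engine itself, not
    reconstructed after the fact by a SIEM."""
    def __init__(self, key_a: bytes, key_b: bytes):
        # In practice each key lives with a different key-holder.
        self._key_a, self._key_b = key_a, key_b

    @staticmethod
    def sign(key: bytes, payload: bytes) -> bytes:
        return hmac.new(key, payload, hashlib.sha256).digest()

    def commit(self, payload: bytes, auth_a: bytes, auth_b: bytes) -> bool:
        ok_a = hmac.compare_digest(auth_a, self.sign(self._key_a, payload))
        ok_b = hmac.compare_digest(auth_b, self.sign(self._key_b, payload))
        return ok_a and ok_b  # invariant: both proofs, or no commit

key_a, key_b = secrets.token_bytes(32), secrets.token_bytes(32)
engine = TransferEngine(key_a, key_b)
payload = b"transfer:acct123->acct456:1000.00:GBP"

both = engine.commit(payload,
                     TransferEngine.sign(key_a, payload),
                     TransferEngine.sign(key_b, payload))
one = engine.commit(payload,
                    TransferEngine.sign(key_a, payload),
                    b"\x00" * 32)
assert both and not one
```

An attacker who fully compromises one key-holder still cannot forge the second proof, so the unauthorised transfer never happens — there is no incident to respond to quickly.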

Encrypted observability: the organisation still needs to observe its systems, but the observation itself should not be a route to data exposure. Splunk logs, Datadog metrics, and other observability platforms are legitimate attack targets — they often contain unencrypted information about user activity, transaction details, and system internals. Instead, observability should be encrypted end-to-end, with decryption keys held only by authorised analysts and subject to continuous audit. The metric: can observability infrastructure be fully compromised without exposing monitored data?

Reframing the Regulator's Question

The regulatory pressure toward MTTR has been genuine. The SEC's four-business-day rule, NYDFS Part 500, NIS2 — all mandate rapid disclosure. But the obligation is to disclose to affected parties, not to detect quickly. An organisation that prevents breach entirely is not required to disclose anything. An organisation that implements post-breach-resistant architecture — such that a compromise of its systems does not result in loss of confidentiality or integrity — faces a different regulatory calculus: if no data was accessed or altered, most regimes require no breach notification.

The regulator's question is not how fast can you respond? but can you ensure that a compromise does not result in data loss? The financial system has already begun to recognise this. The SEC's cybersecurity disclosure rules (in force since late 2023) require disclosure of material breaches; they do not mandate MTTR. APRA's Prudential Standard CPS 234 (Information Security) requires Australian authorised deposit-taking institutions to maintain information-security resilience and incident response capabilities, but it does not prescribe detection speed. The shift, globally, is toward outcome-based regulation: ensure the organisation can survive and operate after compromise.

This is the space in which post-breach resistance becomes not just a security virtue but a regulatory requirement.

The Architecture That Follows

An organisation optimising for post-breach resistance — rather than speed of response — would look radically different from the current security stack.

The control plane and data plane would be cryptographically separated. Authentication and authorisation decisions would be made in a hardened, minimal control plane. The data plane would operate under the assumption that the control plane might be compromised. All data would be encrypted such that loss of control-plane keys does not expose data-plane contents. This is not "layered defence" in the sense of defence-in-depth. It is structural independence.

Lateral movement would be eliminated not by network segmentation (which can be bypassed once a system is compromised) but by cryptographic per-transaction authorisation. Every call to a database, every API request, every file access would require fresh cryptographic proof of authority. This is far more operationally demanding than traditional access controls, but it has a decisive advantage: an attacker who compromises a single workstation cannot trivially extend their access across the organisation.
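A minimal sketch of per-transaction authorisation, under stated assumptions: the `mint_proof` and `DataPlane` names are hypothetical, the proof is a short-lived HMAC over the principal, resource, timestamp, and a nonce, and a shared symmetric key stands in for what would ideally be an asymmetric or token-service scheme.

```python
import hashlib
import hmac
import secrets
import time

TTL_SECONDS = 30  # proofs expire quickly; a stolen proof is near-worthless

def mint_proof(key: bytes, principal: str, resource: str, now=None):
    """Issue a fresh, single-use proof of authority for one operation."""
    ts = int(now if now is not None else time.time())
    nonce = secrets.token_hex(8)
    msg = f"{principal}|{resource}|{ts}|{nonce}".encode()
    return ts, nonce, hmac.new(key, msg, hashlib.sha256).hexdigest()

class DataPlane:
    """Verifies a fresh proof on every call; nothing is trusted by
    virtue of network position."""
    def __init__(self, key: bytes):
        self._key = key
        self._seen = set()  # replay cache of (ts, nonce) pairs

    def authorise(self, principal, resource, ts, nonce, mac, now=None):
        now = now if now is not None else time.time()
        if now - ts > TTL_SECONDS or (ts, nonce) in self._seen:
            return False  # expired or replayed
        msg = f"{principal}|{resource}|{ts}|{nonce}".encode()
        expected = hmac.new(self._key, msg, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(mac, expected):
            return False
        self._seen.add((ts, nonce))
        return True

key = secrets.token_bytes(32)
plane = DataPlane(key)
ts, nonce, mac = mint_proof(key, "svc-billing", "db://ledger/read")
assert plane.authorise("svc-billing", "db://ledger/read", ts, nonce, mac)
# The same proof cannot be replayed, and stale proofs are rejected:
assert not plane.authorise("svc-billing", "db://ledger/read", ts, nonce, mac)
assert not plane.authorise("svc-billing", "db://ledger/read",
                           ts - 3600, nonce, mac)
```

The operational cost is real — every call pays a verification round — but an attacker on one workstation holds, at most, the handful of short-lived proofs minted there, not standing access to the estate.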

Incident response would no longer be a time-pressured affair. Because the data plane is post-breach-resistant, response can be measured in days or weeks rather than hours. The organisation can investigate methodically, understand the full scope of compromise, and plan a rebuild that is not rushed.

Detection would shift from volume (how many signals can we aggregate?) to precision (what architectural changes are necessary to eliminate this attack vector?). The SIEM would become a decision-support tool, not a security control. The focus would move from did we detect the attack? to can we eliminate the condition that permitted this attack in the first place?

The Intellectual Demand

This is not a comfort position. It requires organisations to accept that fast response is not a security strength but a failure mode — an admission that prevention has been abandoned. It requires investment in cryptographic infrastructure that complicates operations. It requires a different conversation with regulators, insurance providers, and boards about what security actually means.

But it is the only position that withstands intellectual scrutiny. MTTR metrics have produced a generation of security practitioners optimised for speed of busywork, not prevention of harm. The incidents that matter — Snowflake, Change Healthcare, Synnovis — were not caused by slow response. They were caused by architectures that permitted breach to have consequence.

The question is not how to get faster. It is how to build systems where speed no longer matters.

---

Organisations operating under material confidentiality and integrity constraints are invited to request a technical briefing under executed Mutual NDA.

Engagement

PULSE engages only with verified counterparties. Strategic briefing material — reference architecture, regulatory mapping, deployment topology — is released after counter-execution of the NDA scoped to the recipient's evaluation purpose.