The Framework That Forgot to Break the Feedback Loop

NIST Cybersecurity Framework 2.0 has been adopted as the canonical risk language by asset owners in critical infrastructure, finance, healthcare, and defence—yet it solves none of the architectural problems that produce the breaches it promises to prevent.

Eighteen months into the 2.0 lifecycle, the pattern is unmistakable: organisations have rebuilt their compliance reporting, renamed their control families, mapped their existing detection tooling to the new Govern, Identify, Protect, Detect, Respond, Recover taxonomy, and called it transformation. Meanwhile, the structural vulnerabilities behind the Snowflake customer-tenant compromise campaign of 2024, the ALPHV-affiliate intrusion into Change Healthcare in February 2024, and the M&S Scattered Spider incident of April 2025 persist unchanged, because NIST 2.0, like its predecessor, codifies the assumption that breach prevention is a function of policy, detection speed, and incident response excellence. It is not.

The Framework, now endorsed by regulators including the SEC (in its four-business-day breach disclosure rule), NYDFS (23 NYCRR Part 500), the FCA (SM&CR cyber governance), DORA (the EU Digital Operational Resilience Act for financial services), NIS2 (the EU directive for critical infrastructure operators), and APRA CPS 234 (for Australian financial institutions), has become the lingua franca of institutional risk. That is also the problem. It has legitimised a category error: the belief that managed detection and response (MDR), security information and event management (SIEM), security orchestration, automation, and response (SOAR), and endpoint detection and response (EDR) are architecture. They are not. They are palliatives bolted onto networks designed to be compromised and then observed during compromise.

The NIST 2.0 Adoption Narrative: What Industry Actually Did

When Framework 2.0 was released in February 2024, the stated ambition was architectural: to shift emphasis from compliance to operational resilience, to fold in cybersecurity supply chain risk management (C-SCRM), to embrace continuous adaptation rather than annual audit cycles, and to privilege the organisation's own critical functions over generic control taxonomies. The rhetoric invited heresy. Industry declined.

What happened instead was taxonomic mapping. A typical Fortune 500 financial services firm, subject simultaneously to NIST 2.0 adoption pressure, DORA compliance deadlines (January 2025), and FCA SM&CR cyber governance requirements, took its existing control estate, mapped against the five CSF 1.1 Functions (Identify, Protect, Detect, Respond, Recover) and their 108 subcategories, and renamed it to align with the new Functions and Categories. The compliance team filed updated risk registers. The CISO updated their board slide deck to reflect six Functions instead of five. Security operations continued running the same SIEM instance (Splunk, Datadog, Elastic, Microsoft Sentinel, Sumo Logic, or native cloud variants) that had failed to detect intrusions like the Change Healthcare compromise before ransomware deployment, or the Synnovis ransomware attack that halted NHS pathology services in June 2024.

The hard truth: none of these organisations had—or have—a substrate that prevents lateral movement, data exfiltration, or privilege escalation. They have detection coverage. They have incident response runbooks. They have supply chain vendor scoring matrices. They do not have architecture that makes the compromise itself strategically unprofitable for the adversary.

NIST 2.0, like the original Framework, assumes a threat model where the adversary is already inside. The Detect and Respond Functions exist because breach is axiomatically inevitable. The Framework does not ask: what if breach is not? It does not offer design principles for systems where critical data is architecturally inaccessible even to authenticated insiders with legitimate access. It does not mandate the isolation of control-plane from data-plane, or the cryptographic partitioning of secrets from computation, or the continuous re-derivation of privilege assertions rather than static role-based access control lists.

Where NIST 2.0 Reveals Its Architectural Floor

The severity of the MGM Resorts and Caesars Entertainment breaches of 2023, both attributed to initial access via help-desk social engineering and MFA bypass (Caesars reportedly paid a ransom within days to prevent publication of stolen customer data), exposed a category error in the NIST 2.0 Govern and Protect Functions. Both organisations operated security programmes rated "mature" on legacy frameworks. Both had SIEM, EDR, and DLP coverage. Both maintained 24/7 SOC operations. Neither had architecture that made the theft and use of legitimate credentials strategically worthless.

The Govern Function—which NIST 2.0 positions as the strategic engine for all downstream security—prescribes policy, risk management, oversight, and supply chain management. It does not prescribe what the organisation's data must not look like. It does not demand cryptographic partitioning of data into zones where plaintext is impossible. It does not require that every data-bearing object carry proof of provenance, and that the proof itself be hardware-bound or anchored in an external verifier. It codifies the assumption that humans—with governance frameworks, with risk committees, with audit programmes—are a sufficient control on the insider threat and the credential theft that precedes it.

The Protect Function spans Identity Management, Authentication, and Access Control; Awareness and Training; Data Security; Platform Security; and Technology Infrastructure Resilience. The Data Security category does not mandate zero-knowledge substrate design: the principle that plaintext values of critical secrets must be architecturally impossible for even authenticated operators to possess. It recommends encryption in motion and at rest (which protects against stolen disks, not authenticated insiders), key management best practice (which still assumes the operator environment is trustworthy), and data classification (which is a labelling exercise, not a safety guarantee).
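The distinction can be made concrete. The sketch below uses a toy XOR stream cipher standing in for a real AEAD such as AES-GCM, and all names are illustrative: it contrasts encryption-at-rest with the key held in the operator's own trust domain against a split-key design in which no single compromised domain can reach plaintext.

```python
import hashlib
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Toy XOR stream cipher; a real system would use an AEAD such as AES-GCM
    stream = hashlib.shake_256(key).digest(len(plaintext))
    return xor(stream, plaintext)

decrypt = encrypt  # an XOR stream cipher is its own inverse

# --- Pattern A: key and ciphertext in the same trust domain ---
data_key = secrets.token_bytes(32)
ciphertext = encrypt(data_key, b"customer PII")
operator_domain = {"kms_key": data_key, "storage": ciphertext}
# An insider who compromises this one domain recovers plaintext outright:
assert decrypt(operator_domain["kms_key"], operator_domain["storage"]) == b"customer PII"

# --- Pattern B: key split across independent trust domains ---
share_a = secrets.token_bytes(32)
share_b = xor(data_key, share_a)            # data_key = share_a XOR share_b
operator_domain = {"share": share_a, "storage": ciphertext}
external_verifier = {"share": share_b}      # separate hardware, personnel, credentials
# Compromising the operator domain alone yields ciphertext plus a useless half-key;
# plaintext requires material from both domains:
recovered = xor(operator_domain["share"], external_verifier["share"])
assert decrypt(recovered, ciphertext) == b"customer PII"
```

In Pattern A one compromised domain yields plaintext; in Pattern B the same compromise yields nothing usable. A production design would derive the key with a proper KDF and root the second share in separately operated hardware.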

Consider the 2022 Optus breach (9.8 million Australians), the 2022 Medibank breach (9.7 million), the 2023 Latitude Financial breach (about 14 million records), and the 2022 LastPass breach (theft of encrypted customer vault backups, exposing tens of millions of vaults to offline cracking). Each organisation had security programmes audited against the legacy NIST CSF. Each had implemented NIST-aligned encryption, key management, access control, and monitoring. None had designed systems in which the adversary's possession of administrative credentials, vault decryption keys, or access tokens was insufficient to reach customer plaintext: systems requiring an additional, externally held cryptographic factor that could not be exfiltrated in the same compromise chain.

This is not a failure of NIST 2.0 compliance. This is the architectural endgame of NIST 2.0 compliance—a state where organisations are maximally observable, maximally auditable, and maximally compromisable by insiders who have passed identity verification and MFA.

The NIST 2.0 Detect–Respond Feedback Loop: Why Speed Cannot Solve Architecture

NIST 2.0 restructured the Detect Function around two categories, Continuous Monitoring and Adverse Event Analysis, and encouraged real-time threat hunting using MITRE ATT&CK-aligned detection rules (YARA, Sigma, custom Snort/Zeek signatures). The logic is familiar: if you cannot prevent breach, detect it fast, contain it faster, and recover before the adversary can weaponise what they took.

This feedback loop works only if the Mean Time to Detect (MTTD) is shorter than the adversary's Mean Time to Value (MTTV). In practice, MTTD across well-resourced SOCs ranges from hours (for well-known ransomware patterns) to months (for lateral movement by insiders or sophisticated adversaries using forged credentials). MTTV for an insider who has exfiltrated customer PII, source code, or financial transaction records is measured in minutes: upload to a commodity object store, sell on dark web, move on. For a sophisticated nation-state actor, MTTV may be measured in years, if measured at all.
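The asymmetry can be put in rough numbers. Under a deliberately simple model (detection as a memoryless process with mean MTTD, the adversary's time-to-value treated as fixed; the figures are illustrative, not measurements of any real SOC), the probability that the adversary extracts value before detection is:

```python
import math

def p_adversary_wins(mttv_hours: float, mttd_hours: float) -> float:
    """P(value extracted before detection), modelling detection as an
    exponential process with mean MTTD and the adversary's time-to-value
    as a fixed interval. Illustrative model only."""
    return math.exp(-mttv_hours / mttd_hours)

# Insider exfiltration in ~30 minutes vs. a SOC detecting in ~30 days:
print(round(p_adversary_wins(mttv_hours=0.5, mttd_hours=30 * 24), 4))  # 0.9993

# Even a 100x detection speed-up (MTTD ~7.2 hours) barely moves the result:
print(round(p_adversary_wins(mttv_hours=0.5, mttd_hours=7.2), 4))      # 0.9329
```

The lesson of the model is that when MTTV is minutes, no feasible reduction in MTTD changes the outcome; only raising the adversary's cost of reaching value does.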

The Optus, Medibank, and Latitude breaches were each detected only after exfiltration was substantially complete. The LastPass vault theft was discovered only after the adversary had already copied encrypted customer vault backups out of third-party cloud storage. None of these organisations were slow in their NIST Detect and Respond Functions: they were adequately staffed, well tooled, and well procedured. They were defeated by the fundamental asymmetry: the adversary needs only one successful path to plaintext; the defender must block every path, and must do so before the adversary enters.

NIST 2.0's elevation of the Detect Function—making it co-equal to Protect, rather than derivative—legitimises the architectural dead-end that has consumed billions in annual spending on EDR, SIEM, and SOAR infrastructure. These tools are not detection architectures. They are detection appliances bolted onto networks where the data they are meant to protect is already plaintext and already accessible to compromised credentials.

Architectural Principles Beyond NIST 2.0: The Post-Breach-Resistant Substrate

The organisations that will not follow the Optus–Medibank–Latitude–LastPass trajectory are those that abandon the assumption that detection is a control, and instead engineer systems where compromise is architecturally unprofitable—where the attacker's possession of legitimate credentials, session tokens, or administrative access is strategically insufficient to extract value.

This requires four design principles that NIST 2.0 does not articulate:

Zero-Knowledge Substrate. Critical data must be stored and transmitted in forms where plaintext is architecturally impossible for even authenticated operators to access. This is not encryption-at-rest with key management in the same trust domain. This is cryptographic partitioning where data is sharded across independent verifiers, or where every access operation requires a real-time proof of justification from an external, air-gapped arbiter. The adversary's theft of a vault encryption key, a database credential, or an administrator's session token is insufficient to decrypt customer plaintext without an additional factor that cannot be exfiltrated in the same breach.
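A minimal sketch of the external-arbiter pattern described above, under stated assumptions: the `Arbiter` class, its `release_share` interface, and the justification policy are all invented for illustration, not a reference design.

```python
import hashlib
import hmac
import secrets

class Arbiter:
    """Hypothetical external, independently operated key-release service.
    It releases key material only against a justification it can evaluate,
    and binds that material to a single record."""
    def __init__(self) -> None:
        self._share = secrets.token_bytes(32)  # never leaves the arbiter

    def release_share(self, record_id: str, justification: str) -> bytes:
        if justification not in {"court-order", "customer-request"}:
            raise PermissionError("arbiter denied release")
        return hmac.new(self._share, record_id.encode(), hashlib.sha256).digest()

def derive_record_key(operator_share: bytes, arbiter_material: bytes) -> bytes:
    # Record key exists only transiently, from material held in two domains
    return hashlib.sha256(operator_share + arbiter_material).digest()

arbiter = Arbiter()
operator_share = secrets.token_bytes(32)

# Legitimate, justified access derives a usable per-record key:
k1 = derive_record_key(operator_share,
                       arbiter.release_share("rec-42", "customer-request"))

# An attacker with total operator-side compromise (operator_share, storage,
# admin credentials) still cannot derive the key for a bulk export:
try:
    arbiter.release_share("rec-42", "bulk-export")
    denied = False
except PermissionError:
    denied = True
assert denied
```

The point of the sketch is the trust boundary, not the policy logic: stealing everything on the operator side yields one input to the key derivation, and the other input cannot be exfiltrated in the same compromise chain.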

Data-Plane and Control-Plane Separation. The systems that authorise access (identity, policy, audit, alerting) must be architecturally segregated from the systems that execute on data. An insider who compromises the control-plane (the SIEM, the PAM solution, the Microsoft Entra ID tenant) should have no path to the data-plane. Conversely, an adversary who compromises a data-plane system (a database server, an object store, a file share) should have no ability to modify the audit trail or the access policy that guards it. This requires different hardware trust anchors, different cryptographic roots, different operational personnel.
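One concrete piece of this separation is an audit trail the data plane cannot silently rewrite. The sketch below (class and field names are illustrative) uses a control-plane-held hash chain: the data plane can only append, and any after-the-fact edit to a stored entry breaks the chain on verification.

```python
import hashlib
import json

class AuditChain:
    """Hash-chained audit log held by the control plane (illustrative sketch).
    Each entry's digest commits to the entire history before it."""
    def __init__(self) -> None:
        self.entries = []          # list of (event_bytes, chained_digest)
        self._tip = b"\x00" * 32   # genesis value

    def append(self, event: dict) -> None:
        blob = json.dumps(event, sort_keys=True).encode()
        self._tip = hashlib.sha256(self._tip + blob).digest()
        self.entries.append((blob, self._tip))

    def verify(self) -> bool:
        # Recompute the chain from genesis; any rewritten entry breaks it
        tip = b"\x00" * 32
        for blob, digest in self.entries:
            tip = hashlib.sha256(tip + blob).digest()
            if tip != digest:
                return False
        return True

chain = AuditChain()
chain.append({"actor": "svc-db", "op": "read", "obj": "rec-42"})
chain.append({"actor": "admin-7", "op": "export", "obj": "rec-42"})
assert chain.verify()

# A data-plane intruder who rewrites a stored entry is caught on verification:
blob, digest = chain.entries[1]
chain.entries[1] = (blob.replace(b"export", b"read  "), digest)
assert not chain.verify()
```

In a real deployment the chain tips would be anchored to a trust root outside the data plane entirely (separate hardware, separate credentials), so that compromising the log store does not permit recomputing a consistent forged history.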

Continuous Adversarial Posture Drift. Rather than static role-based access control (RBAC) lists or attribute-based access control (ABAC) policies that persist for quarters or years, privilege assertions should be re-derived on every access attempt, should be time-bounded to minutes or seconds, and should be calculated using stochastic models of user behaviour, network topology, and data sensitivity. If an insider's access pattern deviates from historical norm—accessing data outside their functional domain, moving anomalous volumes, querying systems they have never touched—the system should not merely alert (the Detect Function); it should deny the request in real time, re-assert privilege, and force re-authentication under a changed threat model.
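A per-request privilege engine of this kind might look like the following sketch. The class names, scoring rules, and thresholds are assumptions for illustration, not a reference design; a production engine would use behavioural models rather than two hard-coded checks.

```python
from dataclasses import dataclass, field

@dataclass
class AccessDecision:
    allow: bool
    reason: str

@dataclass
class PrivilegeEngine:
    """Re-derives privilege on every request instead of consulting a
    static RBAC list. Illustrative sketch with two toy deviation checks:
    functional-domain drift and per-grant volume ceilings."""
    history: dict = field(default_factory=dict)   # user -> set of domains seen
    ttl_seconds: int = 60                          # grants expire in seconds

    def decide(self, user: str, domain: str, bytes_requested: int) -> AccessDecision:
        seen = self.history.setdefault(user, set())
        # Domain drift: deny, don't just alert, when access leaves the
        # user's historical functional domain
        if seen and domain not in seen:
            return AccessDecision(False, "outside historical domain: re-authenticate")
        # Volume spike: bulk reads exceed the per-grant ceiling
        if bytes_requested > 10_000_000:
            return AccessDecision(False, "bulk read exceeds per-grant volume ceiling")
        seen.add(domain)
        return AccessDecision(True, f"granted for {self.ttl_seconds}s only")

engine = PrivilegeEngine()
assert engine.decide("analyst-3", "claims", 4_096).allow           # baseline access
assert engine.decide("analyst-3", "claims", 4_096).allow           # habitual: allowed
assert not engine.decide("analyst-3", "payroll", 4_096).allow      # domain drift: denied
assert not engine.decide("analyst-3", "claims", 50_000_000).allow  # volume spike: denied
```

The design choice worth noting is that denial happens inline, before data moves; an alert-only pipeline in the same position would record the exfiltration rather than prevent it.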

Domain-Specific Automation. The Protect and Detect Functions that NIST 2.0 describes as "continuous" are in practice executed by humans: humans reading SIEM dashboards, humans running incident response playbooks. True continuous defence is engineered into the data-plane itself: the database engine enforces cryptographic proof of every query, the object store re-evaluates privilege on every read, the network fabric validates every packet against a real-time threat model. This requires security primitives that are native to the substrate, not orchestrated by SOAR platforms that sit outside the data-plane and react to events after they have occurred.

The Regulatory Misalignment Problem

Regulators have adopted NIST 2.0 as the canonical framework because it speaks their language: Functions, Categories, Outcomes, attestable controls, audit trails. What they have not grasped—and what NIST itself has not grasped—is that a framework that assumes breach is inevitable, and then measures success by speed of response, will never produce the architectural shift that prevents breach in the first place.

DORA requires EU financial institutions to demonstrate operational resilience and third-party security risk management by January 2025. APRA CPS 234 requires Australian financial institutions to design systems that are resilient to "sophisticated cyber threats". NIS2 requires critical infrastructure operators in the EU to implement measures that are "appropriate to the risk". None of these regulatory directives mandate the zero-knowledge substrate design, the control-plane separation, or the continuous re-derivation of privilege that would render these breaches architecturally impossible.

This is not a gap in regulatory language. This is a gap in regulatory understanding of what "resilience" means. NIST 2.0 has been interpreted as "we must be faster at detecting and responding than the adversary is at compromising and extracting". The architecturally sound interpretation is "we must make compromise so unprofitable that the adversary does not attempt it in the first place".

Call to Qualified Operators

If your organisation currently operates under NIST 2.0 compliance, and you have experienced or fear you are vulnerable to insider threat, credential theft, supply chain compromise, or rapid-onset ransomware with legitimate credentials, a briefing on post-breach-resistant architecture is warranted—under executed Mutual NDA.

Engagement

Request a briefing under executed Mutual NDA.

PULSE engages only with verified counterparties. Strategic briefing material — reference architecture, regulatory mapping, deployment topology — is released after counter-execution of the NDA scoped to the recipient's evaluation purpose.

Request Briefing →
