The entire security industry has spent two decades optimising for a threat model that no longer exists, and multi-factor authentication has become the ceremonial proof.

The industry narrative: MFA as mandatory hygiene

Since the Yahoo breaches disclosed in 2016 (which ultimately affected all three billion accounts, compromised via stolen credentials and forged session cookies), the industry has canonised multi-factor authentication as the foundational control. The logic is sound on first reading: a stolen password alone cannot grant access; the attacker must also possess the second factor. Every major regulatory regime has absorbed this gospel. NIST SP 800-63B (since revision 3 in 2017) requires multi-factor authentication at AAL2 and above, the assurance levels appropriate for systems holding sensitive data. UK regulators expect it as part of the FCA's operational resilience and financial crime frameworks. ISO/IEC 27001:2022 (Annex A control 8.5, secure authentication) frames it as a core identity and access management principle. NYDFS Part 500 (as amended in 2023) and DORA (Digital Operational Resilience Act, applicable from January 2025 for EU financial entities) both explicitly require MFA for privileged access.

The tooling has followed. Okta, Duo, Microsoft Entra, CyberArk, and a dozen others have built billion-dollar franchises on the assumption that the second factor—push notification, TOTP, hardware key—is a reliable binding between human intent and system access. The market has stratified: passwordless approaches (Windows Hello, passkeys, FIDO2 hardware keys) now compete fiercely with legacy MFA, each camp claiming superiority. The industry consensus appears settled: implement MFA, audit MFA adoption rates, retire single-factor authentication, move forward.

Then came 2023 and 2024.

In late May 2023, mass exploitation of a zero-day in Progress Software's MOVEit Transfer (CVE-2023-34362, a SQL injection enabling pre-authentication remote code execution) cascaded through thousands of organisations, including US federal agencies and government bodies worldwide. The attackers never needed to defeat MFA; they exploited unpatched application logic before authentication ever occurred. MFA was architecturally irrelevant. Four months later, in September 2023, MGM Resorts disclosed that the Scattered Spider group had achieved initial access by social-engineering the IT help desk into resetting an employee's credentials and MFA enrolment, routing around the second factor rather than defeating it. Then, in February 2024, the Change Healthcare ransomware incident made the complementary point: the UnitedHealth subsidiary was compromised via a remote-access portal using stolen credentials, a portal on which, according to subsequent congressional testimony, MFA had not been enabled; once inside, the attacker exploited the privilege model freely. The remote-access gateway, the one point at which MFA could have been enforced, was a single point of failure. The narrative fractured.

But the most damaging proof came from operational data. Since 2022, researchers at Microsoft and elsewhere have documented "MFA fatigue" (also called push bombing): a social engineering technique in which attackers bombard a target with dozens of legitimate push notifications until the exhausted user, staring at a cascade of prompts, reflexively accepts one without reading. This is not a theoretical attack; it was central to the 2022 Uber breach and to the Lapsus$ campaigns against major technology firms, and it has since been used against government agencies, Fortune 500 companies, and critical infrastructure operators. The technique works because MFA, as deployed, is a human-machine interface problem masquerading as a cryptographic one.

The structural failure: MFA as perimeter, not identity

The industry has misunderstood the threat model. The canonical narrative treats MFA as a control that authenticates the human being—that the person who presents the second factor is the authorised user. This is architectural nonsense. MFA authenticates possession of a device and—at best—presence at a moment in time. It does nothing to establish that the person wielding that device has legitimate authority to perform the action they are requesting.

Consider the Change Healthcare incident from a control-plane perspective. Had MFA been enabled on the compromised portal, what would it have verified? That the requester possessed a valid credential and a device able to satisfy the second factor. An attacker who controls the victim's endpoint, or who has socially engineered a fresh enrolment, satisfies both. And once inside, the attacker held the same access rights as the legitimate user whose credentials they wielded. There was no secondary authorisation layer, no continuous verification that the actions being performed were consistent with the user's role, risk profile, or historical behaviour. MFA is a lock on the front door; it is not a guard inside the building.

This architectural error repeats across every modern MFA deployment. Microsoft Entra, Okta, Duo, CyberArk—all of them conflate authentication (confirming identity via possession of a factor) with authorisation (confirming that the authenticated entity should perform the requested action). The MGM incident exposed this brutally: the attacker, having bypassed MFA through a social-engineered help-desk reset, then had to move laterally to find sensitive data. The lateral movement itself was not protected by MFA. It was protected by legacy RBAC (role-based access control), firewall rules, and segmentation, none of which had been materially strengthened since the 1990s.

The second failure is economic. MFA has become the ceiling of access control investment. Organisations that have deployed Okta or Entra and achieved >95% MFA adoption metrics have, in the regulator's eyes, discharged their duty. They have the NIST checkbox. They have the ISO 27001 tick. The secondary authorisation layer, the one that actually distinguishes between a legitimate user and an attacker wielding stolen credentials, remains unbuilt. The control plane (who may access the system) and the data plane (what they may do within it, and under what conditions) remain fused into a single binary decision: access granted or denied at authentication time.

The third failure is perceptual. Regulators, having mandated MFA, now treat it as a solved problem. The FCA, PRA, NYDFS, and ECB have shifted their supervisory attention elsewhere; MFA implementations rarely receive rigorous inspection. This creates a regulatory arbitrage: a firm can deploy Okta with push notifications, report 98% MFA adoption to its regulator, and remain fundamentally exposed to credential theft and endpoint compromise. The regulator has, unwittingly, created a compliance theatre problem of its own.

The reframing: PULSE doctrine and the architecture of trust

The PULSE doctrine begins with a different question: what if you designed a system under the assumption that credentials and devices will be compromised? Not "if they might be," but "when they have been." How would identity operate then?

The answer is not more factors. It is architectural separation between the control plane and the data plane, continuous adversarial posture adjustment, and zero-knowledge substrate design.

Control-plane and data-plane separation means this: authentication (the control plane) and authorisation (the data plane) must be decoupled not merely logically, but operationally and cryptographically. An MFA system can establish that a credential was valid at a moment in time. But the data plane must then ask a second, independent question: given the identity that has been established, the role they held at last audit, their risk profile, the geolocation and device posture at this moment, the historical pattern of actions they perform, and the sensitivity of the data they are requesting, what is the likelihood that this action is legitimate? This is not a binary gate. It is a continuous confidence score—Bayesian, updated in real time, refusing requests that fall below a threshold, and triggering re-authentication or additional verification when confidence is low.
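A continuous confidence score of this kind can be sketched as a naive-Bayes update in log-odds space. The signal names, likelihood ratios, and thresholds below are illustrative assumptions, not calibrated values from any real deployment:

```python
import math

# Hypothetical likelihood ratios: P(signal | legitimate) / P(signal | attacker).
# Illustrative values only -- a real system would calibrate these from telemetry.
SIGNAL_LR = {
    "known_device": 20.0,
    "usual_geolocation": 8.0,
    "typical_action_for_role": 5.0,
    "unusual_geolocation": 0.1,
    "dormant_account": 0.2,
}

def confidence(prior: float, signals: list[str]) -> float:
    """Update P(legitimate) with independent signals via log-odds (naive Bayes)."""
    log_odds = math.log(prior / (1.0 - prior))
    for s in signals:
        log_odds += math.log(SIGNAL_LR[s])
    return 1.0 / (1.0 + math.exp(-log_odds))

def decide(score: float, allow_at: float = 0.95, step_up_at: float = 0.70) -> str:
    """Not a binary gate: allow, demand re-authentication, or refuse outright."""
    if score >= allow_at:
        return "allow"
    if score >= step_up_at:
        return "step-up"
    return "deny"
```

Even starting from a strong prior that the session is legitimate (say 0.9, because MFA succeeded), a pair of anomalous signals drags the score below the denial threshold, which is precisely the behaviour a binary authentication gate cannot express.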

The technology exists. CrowdStrike Falcon and Google BeyondCorp frame this correctly: the device itself (not the network, not the perimeter) becomes the enforcement point, and the device must continuously attest its integrity to the service. The service, independently, measures the requestor's behaviour against a learned baseline. If a user who normally accesses HR documents from London attempts to access financial transactions from Jakarta, the system should not grant access on the basis of a valid MFA response; it should trigger step-up authentication or deny the request outright.
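The London/Jakarta example can be made concrete as a policy check against a learned baseline. The fields and escalation rules here are hypothetical, a minimal sketch rather than any vendor's actual policy engine:

```python
from dataclasses import dataclass, field

@dataclass
class Baseline:
    """Learned per-user baseline (illustrative fields, not a real product schema)."""
    usual_countries: set = field(default_factory=set)
    usual_resources: set = field(default_factory=set)

def evaluate(baseline: Baseline, country: str, resource: str, mfa_passed: bool) -> str:
    """A valid MFA response is necessary but never sufficient: the request
    must also match the learned baseline, otherwise escalate or deny."""
    if not mfa_passed:
        return "deny"
    anomalies = (country not in baseline.usual_countries) \
              + (resource not in baseline.usual_resources)
    if anomalies == 0:
        return "allow"
    if anomalies == 1:
        return "step-up"   # trigger step-up authentication
    return "deny"
```

A user who normally reads HR documents from London is allowed through; the same credential requesting financial transactions from Jakarta is refused even though the MFA response was valid.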

Continuous adversarial posture adjustment means the following: the system must assume that at any moment, an attacker may possess valid credentials, a device that can receive notifications, and endpoint access. The response is not to strengthen MFA; it is to narrow the window of legitimate access and widen the window of detection. Once a device is compromised, every subsequent action from that device should be treated as suspicious, even if the user's credentials are valid. This demands real-time device posture assessment (Jamf, Intune, or a dedicated EDR's telemetry feed), and the ability to revoke access at the data-plane level, not merely at the identity layer.
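Data-plane revocation keyed on device posture can be sketched as follows. The session and device identifiers are hypothetical, and the compromise signal is assumed to arrive from an EDR or MDM integration outside this snippet:

```python
class DataPlaneGuard:
    """Sketch: data-plane enforcement that consults live device posture on
    every request, rather than trusting a token minted at login time."""

    def __init__(self) -> None:
        self._sessions: dict[str, str] = {}      # session_id -> device_id
        self._revoked_devices: set[str] = set()

    def bind(self, session_id: str, device_id: str) -> None:
        """Record which device a session was established from."""
        self._sessions[session_id] = device_id

    def report_compromise(self, device_id: str) -> None:
        """EDR/MDM flags the device: every session bound to it is now untrusted."""
        self._revoked_devices.add(device_id)

    def authorize(self, session_id: str) -> bool:
        """Checked on every data-plane request, not only at authentication."""
        device_id = self._sessions.get(session_id)
        return device_id is not None and device_id not in self._revoked_devices
```

The point of the sketch is the call site: `authorize` runs per request, so a posture change revokes access mid-session instead of waiting for a token to expire at the identity layer.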

Zero-knowledge substrate is the foundational principle. The system should store only the minimal information necessary to make access decisions, and should handle that information in a way that prevents an attacker who has breached one system from leveraging that breach to compromise others. This means: tokenisation of identity assertions (so that stealing an OAuth token from one service does not grant access to another), cryptographic binding between user, device, and action (so that an attacker cannot replay or forward a compromised session), and strict data minimisation at the authentication layer (so that the identity service itself becomes an unattractive target).

In practice, this looks like: passwordless architecture (WebAuthn, passkeys) combined with device attestation (TPM signatures, secure enclave validation), combined with per-request cryptographic signing of the action being performed, combined with a data plane that refuses to act on any request whose cryptographic integrity cannot be verified. This is not MFA. This is post-authentication resistance by design.
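The per-request cryptographic binding can be illustrated with a symmetric MAC over the user, device, action, and a one-time nonce. A production deployment would use a device-bound asymmetric key (a WebAuthn assertion or TPM signature) so the verifier never holds signing material; HMAC stands in here purely for a self-contained sketch, and all names are illustrative:

```python
import hashlib
import hmac
import json

def sign_request(device_key: bytes, user: str, device: str,
                 action: str, nonce: str) -> str:
    """Bind user, device, action and a one-time nonce into a single MAC, so a
    captured request can be neither altered nor replayed under a fresh nonce."""
    payload = json.dumps(
        {"user": user, "device": device, "action": action, "nonce": nonce},
        sort_keys=True,
    ).encode()
    return hmac.new(device_key, payload, hashlib.sha256).hexdigest()

def verify_request(device_key: bytes, user: str, device: str,
                   action: str, nonce: str, signature: str) -> bool:
    """The data plane refuses any request whose integrity cannot be verified."""
    expected = sign_request(device_key, user, device, action, nonce)
    return hmac.compare_digest(expected, signature)
```

Tampering with any bound field, or presenting the signature under a different key, fails verification, which is the "post-authentication resistance" property: a stolen session artefact is useless for actions it was not minted for.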

Real-world proof: The incidents that should have triggered change

The Snowflake customer-account compromises (surfacing in mid-2024) affected well over a hundred organisations, including Ticketmaster, Santander, and AT&T. The attack did not exploit a weakness in MFA; it exploited a failure in secret management. Credentials harvested over several years by infostealer malware were used directly against customer accounts on which MFA had never been enforced, so the console's MFA option was simply never in the attacker's path. This is a zero-knowledge substrate failure: the credential itself became the attack surface because the system accepted it, alone, as proof of identity and authority.

The Synnovis ransomware attack (June 2024, attributed to the Qilin group) paralysed pathology and blood testing services across London NHS trusts. The initial compromise reportedly came via compromised credentials on a shared mailbox, a scenario where MFA adoption metrics meant nothing, because the shared mailbox was not protected by MFA at all (a common legacy problem). Once inside, the attacker moved laterally across systems using standard directory enumeration, a process that MFA does not constrain.

The M&S incident attributed to Scattered Spider (April 2025) again followed the pattern: the attackers reportedly social-engineered a third-party IT help desk into resetting credentials, and then moved without restriction within the network once the perimeter was breached.

In all three cases, the post-mortems converge on the same point: whether MFA was enabled, never enforced, or reset by a cooperative help desk, it was orthogonal to the control failure that actually mattered.

Toward sovereign digital infrastructure

The security industry's mandatory reporting regimes (the SEC's four-business-day material-incident disclosure rule, NYDFS Part 500, DORA incident reporting) have created an accidental transparency that proves the point: MFA adoption keeps rising, and incident frequency is not falling with it. This is what an architectural ceiling looks like.

A sovereign digital infrastructure, one that preserves the organisation's ability to operate independently of breach frequency, requires us to abandon the assumption that credentials are secret. They will leak. The assumption that devices are uncompromised will fail. The assumption that users will always behave rationally when confronted with push notifications is false. The architecture must be designed to operate correctly in the face of these failures, not merely to prevent them and then trust that prevention holds.

This means: cryptographic identity binding (not just credential storage), device posture as a continuous input to authorisation decisions (not as a one-time enrollment), and a data plane that is architecturally separate from the control plane and makes its own independent decisions about what to allow.

The regulator will eventually catch up. DORA contains hints of it: the requirement for strong authentication mechanisms combined with continuous monitoring of ICT risk suggests that regulators are beginning to ask whether the binary gate of MFA is sufficient. NIS2 and future revisions to NIST SP 800-63 will likely move in the same direction. But the organisations that engineer this separation now will have secured an operational advantage that a later retrofit cannot match.

If you operate critical infrastructure, hold customer data, or transfer value, and you are prepared to examine whether your current identity architecture is a lock on the front door or a genuine proof of trustworthiness, we invite a conversation under mutual NDA.

Engagement

Request a briefing under executed Mutual NDA.

PULSE engages only with verified counterparties. Strategic briefing material — reference architecture, regulatory mapping, deployment topology — is released after counter-execution of the NDA scoped to the recipient's evaluation purpose.

Request Briefing →
