The Post-Quantum Migration Will Repeat Your Cryptographic Failures

The publication of NIST's finalised post-quantum standards in August 2024 — FIPS 203 (ML-KEM, key encapsulation), FIPS 204 (ML-DSA, digital signatures), and FIPS 205 (SLH-DSA, stateless hash-based signatures) — is being treated by industry as a timeline problem: when to swap algorithms, which vendors to trust, how long you have before quantum computers render your RSA-2048 and ECDSA P-256 worthless. This framing is dangerously incomplete. It assumes that the problems which produced today's cryptographic compromises — pervasive key-management failure, cryptographic binding to compromised infrastructure, detection-centric validation schemes, vendor lock-in across the cipher stack — will somehow evaporate once the algorithm IDs change. They will not. You will migrate to post-quantum primitives atop the same architectural rot that allowed threat actors to steal 9.6 billion records in 2024, harvest plaintext across air-gapped networks, and neutralise vendor-supplied detection at will.

The deeper issue: post-quantum cryptography is being positioned as a technical problem when it is fundamentally an infrastructural one. And your infrastructure — the way you generate, store, rotate, bind, and attest keys; the way you validate cryptographic operations; the way you separate control from data; the way you assume detection works — is optimised for the forensic convenience of your incident-response vendor, not for the structural resistance you actually need.

The Industry Narrative: Timeline and Compliance

NIST's post-quantum cryptography project, launched in 2016, selected three algorithms after eight years of vetting: ML-KEM (derived from Kyber, lattice-based, for key establishment), ML-DSA (derived from Dilithium, lattice-based, for signatures), and SLH-DSA (derived from SPHINCS+, a conservative hash-based option whose security rests on hash-function assumptions alone). The consensus is clear — quantum computers capable of breaking 256-bit elliptic curves are generally estimated to be at least a decade away, but the "harvest now, decrypt later" (HNDL) threat has already begun. State-sponsored actors are presumed to be storing encrypted traffic today, waiting for quantum capability to become operational.

The regulatory response has been swift. NIST issued Migration Readiness Guidance in April 2024, targeting federal agencies and critical infrastructure operators. The CISA Post-Quantum Cryptography Initiative has published migration roadmaps tied to NIST CSF and DORA (Digital Operational Resilience Act) timelines. European telecommunications operators under the NIS2 Directive now face explicit PQC transition requirements by 2034 — earlier for operators of "essential services". The SEC's recent cybersecurity disclosure rule expands the definition of material breaches to include "cryptographic compromise" scenarios, which explicitly encompasses the transition period where dual-algorithm stacks coexist. Insurance underwriters are already factoring PQC migration-timeline compliance into cyber-liability premiums.

Real incidents have crystallised the risk. The Synnovis ransomware attack (June 2024) did not itself involve cryptographic compromise, but its cascade through the NHS supply chain demonstrated the fragility of organisations sharing cryptographic material across multiple vendors without independent key custody. The Scattered Spider campaigns against Caesars Entertainment (August 2023) and MGM Resorts (September 2023) both exploited weak key-rotation cycles — legacy RADIUS and Kerberos key material was exfiltrated because rotation policies were never enforced at the cryptographic substrate level. In both cases, the ability to forge authentication tokens persisted for weeks because the organisations had conflated "changing passwords" with "rotating cryptographic material".

The proposed industry solution is mechanical: inventory all uses of RSA, ECDSA, and SHA-1; map each to a sunset deadline; select a NIST PQC algorithm; begin dual-algorithm deployment; phase out classical algorithms by the assigned sunset date. Vendors have published migration roadmaps: AWS has announced KMS support for PQC algorithms from 2025; Microsoft Azure Key Vault has added PQC signing in preview; Fortanix, QuintessenceLabs, and others offer PQC-aware HSMs. CISA has published software supply-chain guidance. The problem is treated as a supply-chain and vendor-selection problem.

The Architectural Failure You Are Repeating

The industry narrative assumes that algorithmic replacement is the hard part. It is not. The hard part — the thing that has made every previous cryptographic transition painful, incomplete, and vulnerable to corner-case compromise — is that you do not know where your cryptographic material actually is, who holds it, how it is rotated, or how it is bound to the infrastructure that uses it.

Consider the LastPass breach (August 2022). The attack did not break LastPass's AES-256 encryption. It broke the key management surrounding it. Threat actors exfiltrated customers' encrypted vault backups from cloud storage, and the critical exposure was that vault security reduced to a single procedural assumption — that users had chosen strong master passwords — compounded by legacy accounts with weak key-derivation iteration counts and vault metadata, including URLs, stored in plaintext. The entire "security" of the system was a user-behaviour assumption bolted onto an architecture that had already been breached. LastPass has revised its architecture since disclosure, but it is still selling a model in which key derivation and key storage are coupled.

Now consider the Optus breach (September 2022). The attack exposed roughly 9.8 million customer records. The underlying cryptographic primitives (reasonable algorithms, competently deployed) were never touched. The failure was access control decoupled from cryptography: an unauthenticated, internet-facing API allowed unfettered read access to customer data. The attackers did not decrypt anything — they did not need to. Key management was so decoupled from the data it protected that the distinction became meaningless.

The Medibank attack (October 2022) was similar: 9.7 million records compromised, encryption in place but irrelevant because the keys were accessible from the same network segment as the data. Encryption becomes a checkbox exercise when the key material and the encrypted data share threat model assumptions.

This pattern — algorithm confidence paired with infrastructure incompetence — has repeated through every major incident of the past five years. You will now repeat it with post-quantum cryptography.

Here is what will happen: you will select ML-KEM for key establishment. You will deploy it alongside RSA-2048 in a "hybrid" configuration (as NIST recommends, and as vendors are now implementing). You will believe you are secure because you are using a quantum-resistant algorithm. But the key-generation process will be performed by the same vendor, on the same infrastructure, with the same lifecycle management, using the same out-of-band attestation schemes that failed you in the RSA era. The ML-KEM key will be stored in an HSM managed by the same vendor that manages your RSA keys. The rotation schedule will follow the same compliance-driven, detection-based timeline that never actually prevented compromise — it only made incident investigation easier.

The cryptographic binding — the way an ML-KEM session key connects to the infrastructure that uses it — will still be performed by the same control plane that the Scattered Spider operators already learned to forge. Your SIEM will still be unable to distinguish between a legitimate ML-KEM operation and a forged one, because the SIEM operates at the application layer and has no cryptographic observability into the data plane.

PULSE Doctrine: Sovereign Key Substrate

The post-quantum migration must be reframed as an opportunity to build what should have been built at the cryptographic layer decades ago: a zero-knowledge key substrate — infrastructure in which no single actor (vendor, operator, administrator) ever holds both the key material and the ability to use it without continuous, domain-specific, cryptographically binding consent from the owner of the data.

This means three architectural separations, none of which are optional:

First: Data plane from control plane. Your encryption operations — the generation of ML-KEM session keys, the binding of those keys to specific data entities, the cryptographic commitment to the identity of the recipient — must be performed in a domain whose only input is the plaintext and whose only output is the ciphertext. This domain has no access to any identifier that could connect the operation to an external database, network, or administrative function. It has no secrets. It is not "secure" in the classical sense; it is transparent. Every cryptographic operation it performs is independently verifiable by any party who knows the session key.

Second: Material custody from material usage. ML-KEM private keys must never be held by the infrastructure that uses them for decryption. Instead, they must be held by an orthogonal custody system that never touches plaintext. When the data plane requires decryption, it performs a zero-knowledge proof that it possesses the ciphertext and the correct context (recipient identity, timestamp, data classification). The custody layer validates the proof, performs the ML-KEM decryption in situ, and returns only the plaintext — never the key. The key is never copied, never cached, never accessible to an attacker who compromises the data infrastructure.
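The custody boundary above can be sketched as a small interface, with two loud simplifications: the context proof here is an HMAC rather than a true zero-knowledge proof, and a toy SHA-256 XOR keystream stands in for ML-KEM decapsulation plus symmetric decryption. All names (CustodyService, toy_decrypt) are hypothetical; the point is the control structure — the private key never crosses the boundary, only plaintext does.

```python
import hashlib
import hmac

def toy_keystream(key: bytes, n: int) -> bytes:
    """Toy SHA-256 counter-mode keystream. Illustrative only - NOT a real cipher."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def toy_decrypt(key: bytes, ciphertext: bytes) -> bytes:
    """XOR keystream: the same function encrypts and decrypts."""
    return bytes(a ^ b for a, b in zip(ciphertext, toy_keystream(key, len(ciphertext))))

class CustodyService:
    """Holds private key material; exposes decrypt-on-proof, never the key."""

    def __init__(self, private_key: bytes, binding_key: bytes):
        self._private_key = private_key   # never leaves this object
        self._binding_key = binding_key   # shared with whoever authorises contexts

    def decrypt(self, ciphertext: bytes, context: bytes, proof: bytes) -> bytes:
        # The data plane must prove this context was authorised for this
        # ciphertext (a MAC here; a zero-knowledge proof in the full design).
        expected = hmac.new(
            self._binding_key,
            hashlib.sha256(ciphertext).digest() + context,
            hashlib.sha256,
        ).digest()
        if not hmac.compare_digest(expected, proof):
            raise PermissionError("context proof rejected")
        # In-situ decryption: only plaintext crosses the boundary.
        return toy_decrypt(self._private_key, ciphertext)
```

The design choice worth noting is that the proof binds ciphertext and context together, so a proof captured for one ciphertext cannot authorise decryption of another.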

Third: Continuous adversarial posture drift. Your key rotation must not be calendar-driven or compliance-driven. It must be continuous and responsive to adversarial indicators. Every ML-KEM key must carry an implicit expiration tied to the cryptographic age of the material; more importantly, any deviation from expected cryptographic usage patterns must trigger immediate key revocation and reissuance. If a data-plane component suddenly begins performing decryption operations outside its normal domain (different recipient, different data classification, different temporal pattern), the custody layer revokes the session key, forces re-authentication, and alerts the operator. This is not detection-and-response — it is post-breach resistance. The attacker who forges a session token will not successfully decrypt data, because the token's cryptographic validity is bound to usage patterns, not just cryptographic correctness.
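The revoke-on-drift behaviour can be sketched as a small per-key state machine. UsageProfile, KeyState, and authorise are hypothetical names; a production custody layer would persist this state and feed it from real telemetry rather than in-memory timestamps.

```python
from dataclasses import dataclass, field

@dataclass
class UsageProfile:
    """Expected usage envelope for one session key, fixed at issuance."""
    recipients: set
    classifications: set
    max_ops_per_minute: int

@dataclass
class KeyState:
    profile: UsageProfile
    revoked: bool = False
    window: list = field(default_factory=list)  # timestamps of recent operations

def authorise(state: KeyState, recipient: str, classification: str, now: float) -> bool:
    """Permit the operation only if it matches the key's expected usage.
    Any deviation revokes the key outright rather than merely raising an alert."""
    if state.revoked:
        return False
    # Keep only operations from the last 60 seconds for the rate check.
    state.window = [t for t in state.window if now - t < 60.0]
    drift = (
        recipient not in state.profile.recipients
        or classification not in state.profile.classifications
        or len(state.window) >= state.profile.max_ops_per_minute
    )
    if drift:
        state.revoked = True  # revoke-first; reissuance requires re-authentication
        return False
    state.window.append(now)
    return True
```

Note the asymmetry: a calendar rotation schedule bounds exposure in time, while this bounds exposure in behaviour — a forged token outside the envelope fails on first use.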

Implementation: Domain-Specific Primitives

The technical substrate must be built from three primitives:

Threshold ML-KEM with non-interactive secret sharing. Standard ML-KEM derives a single ephemeral session key from a single encapsulation. Instead, the session key must be split across three independent custody domains using Shamir's secret sharing with a threshold of two. No single custody domain can decrypt unilaterally. The three domains operate on separate infrastructure (ideally separate cloud providers, separate networks, separate administrative domains), and compromise of any single domain yields no plaintext. This is not new — Shamir's scheme has existed since 1979 — but its application to PQC session-key custody is critical to post-quantum infrastructure design.
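A minimal, self-contained sketch of the 2-of-3 split over a 32-byte session key, using textbook Shamir sharing over GF(256) with the AES field polynomial. The function names are illustrative; a deployment would use a vetted library and authenticated share distribution rather than this byte-wise reference implementation.

```python
import secrets

# GF(256) arithmetic with the AES reduction polynomial x^8 + x^4 + x^3 + x + 1.
def gf_mul(a: int, b: int) -> int:
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B
        b >>= 1
    return p

def gf_pow(a: int, n: int) -> int:
    r = 1
    while n:
        if n & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        n >>= 1
    return r

def gf_inv(a: int) -> int:
    return gf_pow(a, 254)  # a^254 = a^-1 in GF(256)

def split(secret: bytes, n: int = 3, k: int = 2):
    """Share each byte with a fresh random degree-(k-1) polynomial; share i = p(i)."""
    shares = [(i, bytearray()) for i in range(1, n + 1)]
    for byte in secret:
        coeffs = [byte] + [secrets.randbelow(256) for _ in range(k - 1)]
        for x, buf in shares:
            y = 0
            for j, c in enumerate(coeffs):
                y ^= gf_mul(c, gf_pow(x, j))
            buf.append(y)
    return [(x, bytes(buf)) for x, buf in shares]

def combine(shares):
    """Lagrange interpolation at x = 0 recovers each secret byte."""
    xs = [x for x, _ in shares]
    out = bytearray()
    for i in range(len(shares[0][1])):
        byte = 0
        for j, (xj, buf) in enumerate(shares):
            num, den = 1, 1
            for m, xm in enumerate(xs):
                if m != j:
                    num = gf_mul(num, xm)       # product of other x-coordinates
                    den = gf_mul(den, xm ^ xj)  # subtraction is XOR in GF(2^8)
            byte ^= gf_mul(buf[i], gf_mul(num, gf_inv(den)))
        out.append(byte)
    return bytes(out)
```

Any two shares reconstruct the key; one share alone is information-theoretically useless, which is what makes single-domain compromise non-fatal.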

Commitments with embedded context. Every ML-DSA signature operation must cryptographically commit not just to the data being signed, but to the operational context: recipient identity, timestamp, data classification, source network segment, and consuming application. The signature must be non-transferable — a signature generated to authorize decryption by Application A for Recipient B cannot be replayed to authorize decryption by Application C for Recipient D. This is implemented via pre-signed Merkle commitments, where the signature root incorporates the context hash as an input, not an afterthought. If an attacker exfiltrates a signature, it is useless without both the correct context and the knowledge of how the context is computed (which is secret to the infrastructure operator).
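One way to realise the non-transferability described above is to make the signed message itself a commitment over the canonicalised context. In this sketch HMAC-SHA256 stands in for ML-DSA (the message layout, not the primitive, is the point), and all names are illustrative; a real deployment would pass the same message bytes to a FIPS 204 implementation.

```python
import hashlib
import hmac
import json

def context_digest(context: dict) -> bytes:
    """Canonicalise the operational context (sorted keys, fixed separators)
    so signer and verifier compute the identical commitment."""
    canon = json.dumps(context, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canon.encode()).digest()

def bind_and_sign(signing_key: bytes, data: bytes, context: dict) -> bytes:
    # The signed message commits to BOTH the payload and the context digest,
    # so the signature cannot be replayed under a different context.
    msg = context_digest(context) + hashlib.sha256(data).digest()
    return hmac.new(signing_key, msg, hashlib.sha256).digest()

def verify(signing_key: bytes, data: bytes, context: dict, sig: bytes) -> bool:
    msg = context_digest(context) + hashlib.sha256(data).digest()
    expected = hmac.new(signing_key, msg, hashlib.sha256).digest()
    return hmac.compare_digest(expected, sig)
```

Changing any context field — recipient, timestamp, classification — changes the digest and invalidates the signature, which is the replay resistance the paragraph demands.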

Cryptographic rate-limiting via client puzzles. Before the custody layer performs an ML-KEM decryption, the data plane must present evidence that it has performed a specific computational task (e.g., an iterated SHA-256 hash chain over the ciphertext, 2^20 rounds) — a proof-of-work client puzzle rather than a zero-knowledge proof in the strict sense. The puzzle costs every requester real computational time: a legitimate operation pays it once per session (a fraction of a second); a brute-force sweep over 10,000 candidate contexts pays it 10,000 times over. This forces the attacker into a detection window, where the frequency of failed decryption attempts becomes an observable pattern.
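A hash-chain sketch of the puzzle, assuming the iterated-SHA-256 construction described above; solve_puzzle and check_puzzle are hypothetical names. One honest caveat in the comments: this verifier pays the same cost as the prover, so where the custody layer must stay fast, a Hashcash-style partial-preimage puzzle (one hash to verify) would be substituted.

```python
import hashlib
import hmac

def solve_puzzle(ciphertext: bytes, context: bytes, iterations: int = 2 ** 20) -> bytes:
    """Data plane: compute the hash chain once per decryption request.
    The chain is seeded from the ciphertext AND context, so work done for
    one context cannot be reused against another."""
    h = hashlib.sha256(ciphertext + context).digest()
    for _ in range(iterations):
        h = hashlib.sha256(h).digest()
    return h

def check_puzzle(ciphertext: bytes, context: bytes, proof: bytes,
                 iterations: int = 2 ** 20) -> bool:
    """Custody layer: recompute the chain and compare in constant time.
    NOTE: verification here costs as much as solving; a partial-preimage
    puzzle (find nonce s.t. the hash has N leading zero bits) gives cheap
    verification when the verifier must be fast."""
    expected = solve_puzzle(ciphertext, context, iterations)
    return hmac.compare_digest(expected, proof)
```

The iteration count is the tuning knob: it sets the per-attempt cost an attacker pays when sweeping candidate contexts, and thus the width of the detection window.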

The Regulator's Stake

NIST's cryptographic selections are the easy part. What matters now is whether regulatory bodies will enforce the infrastructure principles that must accompany them.

DORA (which applies from January 2025 to EU financial entities) explicitly requires "cryptographic key management" as a material control, but does not specify the architecture. NIS2 similarly requires "secure cryptographic storage" but allows vendor interpretation. The SEC's cybersecurity rules require disclosure of material incidents — including "cryptographic compromise" — within four business days of determining materiality, but provide no guidance on what constitutes adequate post-quantum key architecture.

The Australian Prudential Regulation Authority's CPS 234 (in force since 2019) comes closest. Its information-security requirements point at the right principle: cryptographic material should be protected such that no single person or system holds both the material and the means to use it. That principle should be the global standard.

The gap between NIST's algorithm guidance and actual regulatory enforcement of key architecture is where the post-quantum migration will fail. If regulators allow vendors to deploy ML-KEM using the same key-management practices that failed in the RSA era — keys stored in the same system that uses them, rotation on compliance calendars, detection-based validation — then you will have traded one set of cryptographic assumptions for another, solved nothing, and delayed the real problem another decade.

What This Demands

The post-quantum migration is not a timeline. It is a choice between architecture and compliance theatre. The choice must be made at the infrastructure layer, not the vendor-selection layer.

If you operate critical infrastructure, hold or transfer material value, or hold data subject to long-term confidentiality requirements, your PQC migration must include: zero-knowledge key substrate separation, continuous posture drift, domain-specific cryptographic binding, and multi-party custody models. Anything less is algorithmic replacement without architectural resistance.

Organisations attempting post-quantum migration using conventional HSMs, key-management platforms, and vendor-supplied APIs should expect that the migration will be technically complete, regulatorily defensible, and cryptographically vulnerable to the same categories of compromise that undermined the RSA era.

Qualified operators with critical infrastructure mandates and the architectural authority to enforce substrate-level change should request a technical briefing under executed NDA.

Engagement

Request a briefing under executed Mutual NDA.

PULSE engages only with verified counterparties. Strategic briefing material — reference architecture, regulatory mapping, deployment topology — is released after counter-execution of the NDA scoped to the recipient's evaluation purpose.

Request Briefing →
