Data-in-Motion Will Determine Whether Your Organisation Stays Sovereign or Becomes a Node in Someone Else's Supply Chain
The industry's three-decades-long assumption that file transfer is a solved problem—commoditised, audited, compliant—collapsed on 31 May 2023, when Progress Software disclosed CVE-2023-34362 in MOVEit Transfer. What followed was not a single breach, but a cascade: Cl0p ransomware operators had been exploiting the zero-day in the wild since at least 27 May, days before disclosure, and by the campaign's end more than 2,000 organisations across financial services, healthcare, government, and energy had been compromised. The Verizon Data Breach Investigations Report (2024) later identified the MOVEit campaign as the principal driver of that year's surge in breaches initiated through vulnerability exploitation, eclipsing contemporaneous third-party incidents such as the Latitude Financial compromise (2023) in both velocity and reach.
Yet the conversation that followed—patching speed, vendor accountability, software supply chain rigour—addressed symptoms, not cause. The structural failure was not slow patch deployment or inadequate vendor security. It was the assumption that a single, feature-rich, centralised file-transfer application could simultaneously serve as an operational workhorse and a cryptographic perimeter. MOVEit exposed what post-breach-resistant architecture demands we finally accept: data-in-motion through any single application layer is a single point of adversarial leverage. And because file transfer is the fundamental mechanism by which regulated organisations exchange sensitive information with counterparties, that leverage reaches into the supply chains of anyone downstream.
This article reframes MOVEit not as a patch-management problem, but as an architectural confession. It is written for operators responsible for sovereign digital infrastructure in regulated sectors, and those tasked with designing B2B data flows that remain resilient after breach—not despite it.
The Industry Narrative: MOVEit in Context
CVE-2023-34362 was a pre-authentication SQL injection vulnerability in MOVEit Transfer's web application layer, granting unauthenticated remote attackers arbitrary code execution with application privileges. Progress released patches on 31 May, the same day it publicly disclosed the vulnerability; Cl0p operators had already been exploiting it at scale since at least 27 May, over the US Memorial Day weekend. Within days, CISA added the CVE to its Known Exploited Vulnerabilities catalogue, triggering mandatory remediation deadlines for federal agencies under Binding Operational Directive 22-01, and published joint advisory AA23-158A with the FBI. By month-end, confirmed victims spanned healthcare networks, pharmaceutical firms, financial institutions, government agencies, and critical national infrastructure operators; the UK payroll provider Zellis alone exposed data belonging to the BBC, British Airways, and Boots.
The severity lay not merely in the vulnerability's technical properties—SQL injection is, by 2023 standards, a pedestrian attack vector—but in MOVEit's architectural position as a centralised chokepoint. Progress's own documentation positioned MOVEit Transfer as a "secure file transfer solution" for regulated environments. Its cloud edition carried FedRAMP authorisation, the product supported FIPS 140-2 validated cryptography, and deployments sat behind enterprise firewalls. Most critically, organisations had built their B2B file-exchange workflows around it: partners were configured as MOVEit users, data schemas were locked to MOVEit's proprietary transfer protocols, and compliance narratives (ISO 27001, NIST CSF, HIPAA, PCI-DSS, GDPR) had been constructed around the assumption that the application itself was the security boundary.
The incident echoed earlier file-transfer catastrophes—the Accellion FTA compromise (2021) and the Fortra GoAnywhere MFT zero-day (early 2023), both also exploited by Cl0p—but the sheer scale and downstream impact revealed a structural blind spot. Every organisation running MOVEit had outsourced the integrity of their entire B2B data ecosystem to a single vendor's application security practice. When that practice failed, there was no fallback. Data was exfiltrated before patches could be deployed. The Cl0p operators began naming victims on their leak site within weeks, demonstrating that the data had already left the custody of every affected organisation. Remediation, in that environment, meant breach notification, not prevention.
The regulatory response—CISA alerts, sector-specific guidance from national cyber authorities such as the NCSC, subsequent audit scrutiny from SOC 2 Type II assessors and QSA auditors—focused on the correct implementation of detection and response: EDR tuning, SIEM correlation rules (YARA, Sigma), forced patching, and incident timeline reconstruction. No regulator mandated a rearchitecture of file-transfer workflows themselves. Compliance dashboards updated their MOVEit tracking, incident response playbooks were drilled, and—by early 2024—the narrative had settled: MOVEit was a one-off incident at an otherwise mature vendor; patches now deployed rapidly; file transfer remained a solved problem.
The Structural Reading: Single-Application Centralisation as Breach Prerequisite
PULSE's doctrine begins from a different premise: the incident that caused the most damage is not the one that was detected fastest, but the one whose failure mode was pre-engineered into the architecture. MOVEit did not fail because of a single vulnerability. It failed because the entire B2B data-in-motion ecosystem had been consolidated into a single application layer, operated by a single organisation, with a single authentication posture, a single cryptographic material store, and a single set of permissions.
This consolidation is not accidental. It is the natural outcome of how file-transfer infrastructure has been designed for 30 years: a centralised, multi-tenant application that abstracts away the complexity of cryptography, network topology, and authentication, in exchange for operational simplicity and ease of auditability. MOVEit, GoAnywhere MFT, Accellion FTA, ShareFile—all follow the same architectural pattern. A user logs in with a password. The application verifies credentials. The application handles the encryption key material. The application logs the transfer. Auditors verify the logs. Compliance is confirmed.
From a detection-and-response perspective, this is efficient. From a post-breach-resistance perspective, it is catastrophic. Because the moment the application layer is compromised—through any vector, SQL injection or otherwise—the attacker owns not only the current data transfer, but the entire historical record, all stored credentials, all encryption keys, and all future transfers until the compromise is discovered and remediation is complete. At the scale that MOVEit operates, that window can be measured in weeks or months.
The industry's response has been to add layers of detection: deploy EDR agents on MOVEit servers, configure SIEM to alert on anomalous file access patterns, implement DLP rules to prevent unusual data egress, mandate multi-factor authentication for administrative access. These are not wrong. They are necessary. But they are also orthogonal to the core problem: they assume the application layer will continue to exist, and that its compromise can be detected quickly enough to matter. In the MOVEit case, Cl0p operators executed their exfiltration through the application's own data-access paths, using a web shell (LEMURLOOT) that minted legitimate-looking administrative sessions for legitimate-appearing transfer jobs. No DLP rule can distinguish between an authorised batch transfer and an unauthorised one when both execute through the same application with the same authentication. No EDR signature catches what looks, from the network layer, like normal file access.
The deeper reading is this: any architecture that requires a single application to simultaneously serve as cryptographic boundary, access control layer, audit trail, and operational workhorse has engineered breach into its essential design. It is not a question of whether the application will be compromised, but when, and whether the compromise will be discovered in time.
The Zero-Knowledge Pivot: Data-in-Motion Without Centralised Authority
PULSE's doctrine responds by inverting the architecture entirely. Instead of a centralised application that owns the data and the keys, and to which auditors grant access, we architect a substrate in which data-in-motion is cryptographically bound to the specific transfer intent, and the application layer becomes stateless—a protocol engine, not a data custodian.
This requires three design principles working in concert.
First: zero-knowledge substrate. The organisation running the file-transfer infrastructure—call it the "transfer operator"—never possesses the plaintext of the data being transferred, nor the cryptographic keys that unlock it. Instead, the sending organisation and receiving organisation exchange a shared secret directly, out-of-band (via a secure channel such as an authenticated key-agreement protocol, a hardware security module exchange, or pre-shared material). The transfer operator sees only encrypted blobs in transit and audit-only metadata (timestamps, sender identity, receiver identity, data length—nothing that reveals content).
In practice: Party A encrypts data with a symmetric key derived from the shared secret with Party B. Party A submits the ciphertext to the transfer operator. The transfer operator—call it Service T—stores the ciphertext, logs the transaction (without key material), and makes it available to Party B. Party B retrieves the ciphertext and decrypts it using the same shared secret. Service T never sees the plaintext or the key. If Service T is compromised, the attacker obtains encrypted data and audit metadata, neither of which reveals the underlying information. The architecture assumes breach: Service T will be compromised eventually, and that compromise must not grant access to customer data.
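The flow above can be sketched end-to-end in a few dozen lines. This is an illustrative, stdlib-only sketch, not a production design: the names mirror the text, the key derivation is a minimal HKDF, and the SHA-256 counter keystream with an HMAC tag stands in for a real AEAD such as AES-GCM or XChaCha20-Poly1305, which any actual deployment should use instead.

```python
import hashlib
import hmac
import secrets

def hkdf_sha256(shared_secret: bytes, info: bytes, length: int = 32) -> bytes:
    """Minimal HKDF (RFC 5869) over SHA-256: extract, then expand."""
    prk = hmac.new(b"\x00" * 32, shared_secret, hashlib.sha256).digest()
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy SHA-256 counter keystream; a real deployment would use an AEAD."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(shared_secret: bytes, plaintext: bytes) -> dict:
    """Party-side encryption: Service T never sees the key or plaintext."""
    enc_key = hkdf_sha256(shared_secret, b"enc")
    mac_key = hkdf_sha256(shared_secret, b"mac")
    nonce = secrets.token_bytes(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(enc_key, nonce, len(plaintext))))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return {"nonce": nonce, "ciphertext": ct, "tag": tag}

def decrypt(shared_secret: bytes, blob: dict) -> bytes:
    """Receiver-side decryption: verify the tag before using the ciphertext."""
    enc_key = hkdf_sha256(shared_secret, b"enc")
    mac_key = hkdf_sha256(shared_secret, b"mac")
    expected = hmac.new(mac_key, blob["nonce"] + blob["ciphertext"], hashlib.sha256).digest()
    if not hmac.compare_digest(expected, blob["tag"]):
        raise ValueError("integrity check failed")
    ks = keystream(enc_key, blob["nonce"], len(blob["ciphertext"]))
    return bytes(a ^ b for a, b in zip(blob["ciphertext"], ks))

# The shared secret is agreed out-of-band by A and B; Service T holds only
# the opaque blob plus audit metadata, never the key or plaintext.
secret = secrets.token_bytes(32)
blob = encrypt(secret, b"quarterly settlement file")
held_by_service_t = {"blob": blob, "sender": "party-a", "length": len(blob["ciphertext"])}
assert decrypt(secret, held_by_service_t["blob"]) == b"quarterly settlement file"
```

Note what Service T's compromise yields under this sketch: `held_by_service_t` contains a nonce, ciphertext, tag, and metadata, none of which decrypts without the out-of-band secret.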
Second: data-plane and control-plane separation. The application layer that handles authentication, user provisioning, and policy enforcement (control plane) must be architecturally independent from the layer that transports encrypted data (data plane). This separation enables continuous adversarial drift: the control plane can be replaced, updated, or behaviourally shifted without touching the data plane, and the data plane can be hardened (stripped down to stateless relay function) without depending on control-plane integrity.
Third: domain-specific automation. Instead of attempting to audit a complex, multi-purpose application, the infrastructure embeds cryptographic and audit guarantees directly into the substrate. Every transfer generates a cryptographic proof of custody (cryptographic commitment that the data was received and stored). Every access generates immutable audit material (signed, timestamped). The application layer cannot falsify these proofs; they are generated by the data-plane substrate itself, using cryptographic material that the application never touches.
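One way to realise such unfalsifiable audit material is a hash-chained, signed log. The sketch below is a simplified assumption-laden illustration: it uses an HMAC key held only by the data plane (a production system would use asymmetric signatures and anchor the chain head externally), and the class and field names are hypothetical.

```python
import hashlib
import hmac
import json

class AuditLog:
    """Hash-chained, append-only audit log. The signing key lives in the
    data plane; a compromised application layer cannot forge or rewrite
    entries without breaking the chain or the signatures."""

    def __init__(self, signing_key: bytes):
        self._key = signing_key
        self.entries = []
        self._head = b"\x00" * 32  # genesis chain head

    def append(self, event: dict) -> dict:
        payload = json.dumps(event, sort_keys=True).encode()
        link = hashlib.sha256(self._head + payload).digest()  # chain commitment
        sig = hmac.new(self._key, link, hashlib.sha256).digest()
        entry = {"event": event, "prev": self._head.hex(),
                 "link": link.hex(), "sig": sig.hex()}
        self.entries.append(entry)
        self._head = link
        return entry

    def verify(self) -> bool:
        """Recompute the chain from genesis; any edit breaks every later link."""
        head = b"\x00" * 32
        for e in self.entries:
            payload = json.dumps(e["event"], sort_keys=True).encode()
            link = hashlib.sha256(head + payload).digest()
            sig = hmac.new(self._key, link, hashlib.sha256).digest()
            if link.hex() != e["link"] or not hmac.compare_digest(sig.hex(), e["sig"]):
                return False
            head = link
        return True

log = AuditLog(signing_key=b"\x01" * 32)  # placeholder key; data-plane only
log.append({"op": "store", "sender": "party-a", "length": 4096})
log.append({"op": "retrieve", "receiver": "party-b"})
assert log.verify()
```

Because each link hashes the previous head together with the entry payload, silently editing or deleting any historical record invalidates the entire suffix of the chain on the next verification pass.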
Applying the Doctrine: Concrete Design Patterns
Consider a specific re-architecture of MOVEit's core workflow in a post-breach-resistant model.
The Legacy Assumption: Organisation A has 500 customer accounts in their MOVEit instance. A customer uploads a file. MOVEit authenticates the customer, stores the file (encrypted at rest, but with keys the application itself holds, per FedRAMP requirements), and sends an email notification to the file recipient. The recipient logs into MOVEit and downloads the file.
The Zero-Knowledge Redesign: Organisation A and each customer execute a one-time secure key agreement (via out-of-band channel, or via a hardware security module at customer premises). The customer encrypts the file locally (using the derived symmetric key). The customer submits the ciphertext and a signed metadata record to Service T via a simple HTTP API (no authentication required; the signature proves origin). Service T stores the ciphertext in immutable object storage (WORM object lock) and appends the metadata to an append-only log. Service T generates a cryptographic receipt (a hash chain commitment) and returns it to the customer. The customer forwards the receipt to the intended recipient via email or out-of-band channel. The recipient retrieves the ciphertext from Service T using only the receipt (no login required) and decrypts it locally using the pre-shared key.
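The submit-and-retrieve half of this redesign can be sketched as follows. For brevity the receipt is simplified to an unguessable token bound to the ciphertext's digest rather than a full hash-chain commitment, and all names (`ServiceT`, `submit`, `retrieve`) are illustrative, not a reference API.

```python
import hashlib
import secrets

class ServiceT:
    """Stateless-relay sketch: stores opaque ciphertext blobs, issues
    receipts, and serves retrieval by receipt alone; no accounts, no login."""

    def __init__(self):
        self._store = {}

    def submit(self, ciphertext: bytes) -> str:
        # The receipt binds an unguessable token to the blob's digest, so it
        # doubles as a custody commitment the sender can forward out-of-band.
        token = secrets.token_hex(16)
        digest = hashlib.sha256(ciphertext).hexdigest()
        self._store[token] = ciphertext
        return f"{token}.{digest}"

    def retrieve(self, receipt: str) -> bytes:
        token, digest = receipt.split(".")
        blob = self._store[token]
        # The recipient can check custody: the blob must match the commitment.
        if hashlib.sha256(blob).hexdigest() != digest:
            raise ValueError("custody commitment mismatch")
        return blob

service_t = ServiceT()
receipt = service_t.submit(b"opaque ciphertext bytes")
assert service_t.retrieve(receipt) == b"opaque ciphertext bytes"
```

The design choice worth noting: possession of the receipt is the entire retrieval credential, so there is no credential store for an attacker to harvest from Service T, and the digest half lets the recipient detect substitution of the stored blob.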
The Breach Scenario Under This Model: An attacker compromises Service T's application layer and gains full filesystem access. The attacker exfiltrates every encrypted blob and every audit log. The attacker cannot decrypt any file because the encryption keys exist only at the sender and receiver. The attacker cannot forge new audit records because each audit entry is cryptographically signed by the data-plane layer (using keys the application never sees). The attacker can disrupt future transfers (by refusing submissions or withholding retrieval), but cannot access past transfers, and the disruption is immediately visible in the audit trail.
This is not theoretical. Organisations in regulated sectors (financial services under DORA, healthcare under HIPAA, telecommunications under NIS2) have begun adopting this pattern, not because the PULSE doctrine mandates it, but because the pattern follows from the simple question: What happens if the application is compromised? If the answer is "nothing material," the architecture is defensible. If the answer is "we lose all data," the architecture has failed.
The Regulator's Dilemma: Why Compliance Frameworks Lag Architecture
It is worth noting that no major compliance framework—ISO 27001, NIST CSF, PCI-DSS, HIPAA, GDPR—explicitly forbids the MOVEit architecture. All of them address application security (vulnerability management, patch cadence), access control (authentication, authorisation), and encryption (in transit and at rest). All of them are satisfied by a sufficiently well-maintained, well-audited, well-monitored centralised file-transfer application.
The frameworks do not distinguish between architectures that assume breach as a design premise and architectures that depend on breach never occurring. They do not require organisations to ask: If the application is compromised, do we lose data? This is a gap in regulatory thinking, not a gap in the frameworks themselves. DORA (Digital Operational Resilience Act), which enters enforcement phase in early 2025 for EU financial institutions, does require "resilience" and "recovery" from cyber incidents, but it does not prescribe architectural patterns. NIS2 (the revised EU Network and Information Systems Directive) similarly requires resilience but defers to implementation.
The MOVEit incident revealed that organisations have been interpreting "compliance" as "the application passed an audit," rather than "we cannot lose data if the application is compromised." Regulators, in the wake of subsequent incidents (Change Healthcare's ransomware infection in 2024, the credential-stuffing campaign against Snowflake customer tenants in mid-2024), are beginning to ask different questions about architectural resilience. But the bar is not yet explicit: organisations can still satisfy their compliance obligations with an architecture that would fail under a post-breach-resistance model.
The Call to Operators
If your organisation transfers data through a centralised application layer, and that application layer has been identified as a security boundary (because compliance audits, vendor marketing, or architectural documentation treat it as such), your organisation is operating under the same assumption that MOVEit's users were operating under on 30 May 2023.
The question for qualified operators is not whether to conduct a remediation project. It is whether to begin now, while breach-resistant architecture is still a competitive advantage, or after the next incident forces the industry to move. Organisations in financial services, healthcare, energy, and critical national infrastructure that are designing data-exchange workflows to survive breach—not merely to detect it—should request a technical briefing under Mutual NDA.
Request a briefing under executed Mutual NDA.
PULSE engages only with verified counterparties. Strategic briefing material — reference architecture, regulatory mapping, deployment topology — is released after counter-execution of the NDA scoped to the recipient's evaluation purpose.
Request Briefing →