The most dangerous word in third-party risk management is "assessed".
Nearly every major breach of the last five years unfolded inside an organisation that possessed a third-party risk management (TPRM) programme. They had vendor questionnaires. They had risk scoring matrices. They had annual attestations, sometimes audits by the Big Four. They had compliance frameworks—ISO 27001, SOC 2 Type II, NIST CSF alignment statements signed by vendors' general counsels. And yet the adversary moved through their supply chain like a knife through tissue.
The Snowflake tenant cascade of 2024 was not a failure of risk assessment. Snowflake's customers—investment banks, fintech operators, data brokers—had all conducted due diligence. They had requested evidence of security controls. Snowflake had published compliance certifications. What Snowflake's own infrastructure lacked was post-breach resistance. When adversaries compromised customers' Snowflake credentials, the customers' data was readable because Snowflake held the decryption keys: any authenticated session received the data in plaintext. The vendor had been "assessed" as compliant. The assessment was worthless because it measured the wrong surface.
The Synnovis laboratory information management incident of 2024—which cascaded across the NHS—followed an identical pattern. Synnovis was a legitimate third party to NHS trusts, operating their pathology workflows. NHS trusts, obliged by NHSX guidance and DSPT attestation requirements to manage third-party risk, had presumably conducted due diligence. Yet when the incident erupted, Synnovis's backup and recovery systems were not segmented from the operating environment. The encrypted backups were stored in the same administrative domain as the production systems. When the adversary obtained domain administrator credentials via a credential-stuffing attack, they could delete backups and force the organisation offline. This was not a gap in the vendor's risk assessment framework. This was a gap in the vendor's architecture—specifically, the absence of cryptographic separation between backup recovery and the entity that authenticates across the estate.
And before Synnovis: the Change Healthcare incident of February 2024, which cascaded across US healthcare providers, began with compromised credentials for a Citrix remote-access portal that had never been enrolled in multi-factor authentication. Change Healthcare was an integral third party to the US healthcare billing and claims infrastructure. Thousands of hospitals had contracted with them, performed due diligence, recorded their risk rating in vendor management systems, and uploaded attestations to procurement platforms. The remediation for the incident—the standard industry response—was to strengthen Change Healthcare's own security controls, enforce multi-factor authentication on the remote-access portal, rotate credentials, and then rescore the vendor's risk rating in the spreadsheet.
That spreadsheet does not defend organisations. It only documents compliance with governance frameworks that measure activity, not outcome.
The Narrative: Control Frameworks Without Architecture
The industry narrative around third-party risk is oriented entirely towards assessment, governance, and contractual enforcement. The NIST CSF (Cybersecurity Framework) Govern function, NIST SP 800-53 SA-9 (External System Services) together with the SR (Supply Chain Risk Management) control family, and the emerging DORA (Digital Operational Resilience Act, now effective across the EU) all recommend similar toolsets: vendor questionnaires (ISO 27001, SOC 2 Type II, penetration testing reports), scoring matrices that map questionnaire responses to risk ratings, contractual language around incident notification and audit rights, and periodic reassessment.
This framework is now codified across regulator guidance. The FCA's SM&CR (Senior Managers & Certification Regime) extends accountability for outsourced functions to named Senior Managers. The PRA's SS2/21 guidance on outsourcing and operational resilience explicitly requires a "detailed understanding" of outsourced service providers. NYDFS Part 500 mandates that organisations maintain and document third-party risk assessments. The Australian Prudential Regulation Authority's CPS 234 (Information Security) requires APRA-regulated entities to maintain "sound and well-documented policies and procedures" for outsourcing arrangements, including controls, security measures, and termination protocols.
And yet: between 2023 and 2025, we have witnessed major breaches at third parties entrusted with the world's pathology records (Synnovis), healthcare claims and payments infrastructure (Change Healthcare), and cloud-hosted analytics (Snowflake). All of these organisations possessed compliance documentation. All had undergone due diligence by sophisticated customers. The TPRM programmes worked as intended. They produced artefacts. They satisfied regulators. They did not prevent the breach.
The architectural problem is this: a TPRM questionnaire measures the policies of the third party, not the substrate. An attestation that an organisation "encrypts data in transit" does not specify whether the encryption is end-to-end, zero-knowledge encryption (client-side, with the organisation never holding plaintext), or TLS-terminated at the organisation's edge (where the organisation holds the plaintext). A SOC 2 Type II report confirming "segregation of duties" does not tell you whether backup recovery and administration are cryptographically separated, or merely procedurally separated in an access control list (ACL) evaluated by a single system in a single security domain.
The control framework is measurement-theatre. It measures the organisation's statement about security, not the organisation's architecture for resistance.
Why Governance Cannot Substitute for Substrate
The fundamental error in TPRM doctrine is the assumption that risk is a function of the vendor's competence and intentions, rather than a function of the vendor's architecture.
Consider the Optus breach of September 2022. Optus was a large Australian telco with mature security governance. It had ISO 27001 certification. It had a Chief Information Security Officer. It had incident response procedures, a Security Operations Centre, and presumably a well-documented access control policy. Yet adversaries exfiltrated the personal data of nearly 10 million customers by exploiting an unauthenticated API endpoint. The endpoint should never have existed—it was a regression introduced by a developer, visible to any actor with network access. The governance framework—the policies, procedures, compliance documentation—could not prevent a developer from making an architectural mistake.
The Latitude Financial incident of 2023 followed a similar pattern. Latitude provides consumer finance embedded in the checkout and lending flows of Australian retailers and financial partners. Those partners had presumably conducted due diligence, reviewed SOC 2 reports, and assessed Latitude's compliance posture. Latitude's own security team had implemented a Web Application Firewall (WAF), intrusion detection (IDS), and security information and event management (SIEM). And yet adversaries used stolen Latitude employee credentials to authenticate into two of Latitude's downstream service providers and exfiltrate roughly 14 million customer records. The governance framework did not prevent a single legitimate credential from traversing multiple organisational boundaries. The framework only ensured that the governance itself was documented.
This is not a failure of TPRM frameworks to ask the right questions. It is a failure of the framework to recognise that there are questions it cannot answer. A TPRM questionnaire cannot tell you whether the vendor's backup recovery plane is cryptographically separated from the control plane. It cannot tell you whether the vendor's development environment is network-isolated from production. It cannot tell you whether the vendor's data-at-rest encryption keys are held by the vendor or by the customer (zero-knowledge). It cannot tell you whether an unauthenticated API endpoint exists.
TPRM frameworks measure governance maturity, which is correlated with vendor size and sophistication—but not with breach resistance.
The Cascade Problem: Third-Party Risk as Supply-Chain Attack Vector
The underlying structural vulnerability that TPRM frameworks cannot address is that third parties serve as entry points into customer environments.
The Change Healthcare incident caused outages at thousands of hospitals across the US. The outage was not because individual hospitals were breached; it was because the adversary's compromise of Change Healthcare's remote-access portal forced the platform offline, severing the claims and payments pipeline for every provider that integrated with it. Hospitals had not signed up for Citrix remote-access risk; they had signed up for claims processing. The vulnerability cascaded because a single third party was wired into each customer's revenue cycle, and every integration point with it was a potential entry into the customer's own environment.
Similarly, in the Synnovis incident, NHS trusts integrated with Synnovis's pathology platform. The platform consumed credentials from the NHS trust's Active Directory (likely via LDAP or a similar protocol). When adversaries obtained credentials to a Synnovis administrator account—via credential stuffing, a technique that requires no zero-day exploit, only password reuse—they could authenticate to Active Directory as a trusted system and move laterally into the NHS trust's backup infrastructure.
This is the cascade problem. A third party is a network graph. Every integration point between the third party and the customer is a potential lateral movement vector. Every credential the third party holds that authenticates to the customer's systems is a potential pivot point. TPRM frameworks can require contractual language around "security controls" and "network segregation", but they cannot verify that segregation exists—and if segregation is procedural rather than cryptographic, it is not segregation at all.
The only architectural response is to assume third-party compromise. The customer's risk strategy cannot be "trust the third party's governance". It must be "design the customer's systems to fail safely when the third party is compromised".
The Zero-Knowledge Principle: Data Substrate Independence
The Snowflake incident provides the clearest example of a fundamental architectural error that no TPRM questionnaire would have detected.
Snowflake customers stored encrypted data in Snowflake's cloud. But Snowflake held the encryption keys. This is an operationally convenient architecture—Snowflake needs to decrypt the data to query it—but it is catastrophic from a post-breach perspective. When Snowflake customers' credentials were compromised, the attacker could authenticate to Snowflake as the customer and read all the customer's data. The encryption was at rest; it was not in use. The data was readable in plaintext to any authenticated user.
The alternative architecture is zero-knowledge encryption: the customer holds the decryption key, and Snowflake's systems never hold plaintext. The query engine operates over encrypted data—via techniques like Order-Preserving Encryption (OPE), Deterministic Encryption (DE), or Searchable Symmetric Encryption (SSE)—or via functional encryption where specific computation results are returned in plaintext but the underlying data remains encrypted.
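One common pattern in this family is the "blind index": the customer derives a deterministic HMAC tag per value, so the vendor can answer equality queries by matching tags without ever holding plaintext or keys. The sketch below is illustrative (the key, store, and values are invented for the example; ciphertexts are elided placeholders, since any client-side scheme would fill them in):

```python
# Sketch of a blind index: deterministic, keyed tags that support
# equality lookup over data the vendor cannot read.
import hmac
import hashlib

INDEX_KEY = b"customer-held-index-key"   # never shared with the vendor

def blind_index(value: str) -> str:
    # Deterministic keyed tag: same value -> same tag, but the tag
    # reveals nothing about the value without INDEX_KEY.
    return hmac.new(INDEX_KEY, value.encode(), hashlib.sha256).hexdigest()

# Vendor-side store: opaque ciphertext keyed by blind index.
vendor_store = {
    blind_index("alice@example.com"): b"<ciphertext-1>",
    blind_index("bob@example.com"): b"<ciphertext-2>",
}

# The customer queries by tag; the vendor learns only that tags match.
query = blind_index("alice@example.com")
assert vendor_store[query] == b"<ciphertext-1>"
```

The trade-off is explicit: deterministic tags leak equality patterns (the vendor can see that two rows share a value), which is why the choice among DE, OPE, and SSE is itself an architectural decision about acceptable leakage, not a checkbox.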
This is not a control-plane issue (access policy, RBAC, session management). This is a data-plane issue (what state the data is actually in when an authenticated session is established).
No TPRM questionnaire asks: "When I authenticate to your platform as my authorised user, what portion of my data is readable in plaintext by your systems?" The question is too deep. It requires understanding the vendor's cryptographic architecture, not the vendor's policy framework.
Yet it is the essential question. It is the difference between "we have encrypted your data" and "your data is in a state where we cannot read it".
Adaptive Posture: Continuous Adversarial Drift
The Medibank incident of 2022 exposed an organisation with mature governance—Medibank was listed on the ASX, regulated by ASIC and APRA, subject to PCI DSS compliance. Medibank maintained security policies, encryption controls, and a security operations centre. Yet adversaries obtained credentials, moved laterally through the network undetected for months, and exfiltrated the personal health records of nearly 10 million Australians.
The remediation, post-incident, followed the standard playbook: investigation by an external forensics firm, implementation of additional detection controls (extended EDR deployment, SIEM rule tuning), enforcement of multi-factor authentication, incident notification under the Privacy Act 1988 (Cth), and victim identity protection services.
What this remediation could not address is that the adversary had obtained a foothold inside Medibank's network. The detection-and-response model assumes that a well-tuned SIEM, EDR agent, and incident response team will eventually find the attacker. In practice, the adversary has a timeline advantage. The attacker can move slowly, use "living off the land" techniques that blend with legitimate system administration, and exfiltrate data across months. Detection, if it occurs, occurs after exfiltration.
The only structural response is to assume active compromise and design the network to be poisonous to lateral movement. This is not a matter of running more detection rules. It is a matter of cryptographic segmentation: every administrative action requires proof-of-knowledge (private key, not shared credential), every data access requires re-authentication to a separate domain, every backup is isolated in its own cryptographic boundary (not just separate storage, but separate trust domain).
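The difference between procedural and cryptographic segmentation can be sketched directly. In the toy below (domain names, keys, and the challenge-response protocol are all illustrative assumptions), each trust domain verifies proof-of-knowledge against its own key, so a credential captured in the production domain is mathematically useless against the backup domain, rather than merely forbidden by an ACL:

```python
# Minimal sketch of cryptographic (rather than procedural) domain
# separation via per-domain HMAC challenge-response.
import hmac
import hashlib
import secrets

class TrustDomain:
    def __init__(self, name: str):
        self.name = name
        self._key = secrets.token_bytes(32)  # never leaves this domain

    def issue_credential(self) -> bytes:
        # Handed only to this domain's operators.
        return self._key

    def verify(self, challenge: bytes, response: bytes) -> bool:
        expected = hmac.new(self._key, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

def respond(credential: bytes, challenge: bytes) -> bytes:
    # Prover computes the response from the credential it holds.
    return hmac.new(credential, challenge, hashlib.sha256).digest()

production = TrustDomain("production")
backups = TrustDomain("backup-recovery")

prod_admin = production.issue_credential()
challenge = secrets.token_bytes(16)

# The production credential authenticates to production...
assert production.verify(challenge, respond(prod_admin, challenge))
# ...but is cryptographically useless against the backup domain:
assert not backups.verify(challenge, respond(prod_admin, challenge))
```

Contrast this with the Synnovis pattern described earlier: a single domain-administrator credential that authenticates everywhere collapses both assertions into one.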
This is what we mean by "adaptive posture"—not the posture itself (firewall rules, RBAC policy), but continuous adjustment of the cryptographic boundaries that define what trust domains exist and how transitions between them are enforced.
The Contractual Trap
Regulators and procurement teams have standardised on a particular TPRM artefact: the vendor risk score, usually a numeric rating (1-5, or 0-100) that reflects the vendor's assessed risk level. This score is then recorded in a vendor management system (VMS)—tools like Vendict, SecurityScorecard, or even just a spreadsheet—and reviewed periodically.
The trap is that this score is treated as causal—as though assigning a score actually reduces risk. In practice, the score is merely correlational. A vendor with a high SOC 2 Type II rating, multi-year incident history, and customer references will receive a low risk score. But that vendor could still have an unauthenticated API endpoint (Optus), a compromised development environment (Latitude), or plaintext-readable data (Snowflake).
The contractual trap is subtler. Many procurement teams now require vendors to carry cyber insurance, to maintain specific compliance certifications, to notify customers of incidents within contractually defined windows, and to meet specified recovery objectives (RPO/RTO). These clauses are unenforceable in practice. Cyber insurance policies have broad carve-outs for supply-chain attacks. Compliance certification is a snapshot in time. Notification windows assume that the vendor has detected the incident—and detection, as we have seen, is unreliable.
The only contractual obligation that matters is data sovereignty: the customer owns the encryption keys, the vendor cannot access plaintext data, and the customer can terminate the relationship by deleting their own keys.
Toward Post-Breach Resistance in Supply Chains
The PULSE doctrine for third-party risk is inverted from the standard TPRM approach. Instead of asking "Can we trust the vendor?", it asks "How do we survive the vendor's compromise?"
This requires three architectural principles:
Zero-knowledge substrate: Customer data is held by third parties in a state where the third party cannot read it. This is not encryption-at-rest with the vendor holding keys—it is encryption-in-use, where the decryption capability is exclusively available to the customer (or to a cryptographic arbiter the customer controls).
Cryptographic separation of trust domains: Integration between customer and vendor systems is not mediated by shared credentials (username, password, API key) or a single authentication domain. Each domain transition requires proof-of-knowledge and fresh authentication to a separate cryptographic boundary. Backup recovery is cryptographically isolated from daily operations. Development environments are isolated from production. Administrative actions require separate credentials and session management.
Continuous adversarial drift: The customer does not score the vendor once and then review annually. Instead, the customer's threat model explicitly assumes vendor compromise and continuously adjusts the cryptographic boundaries that would contain that compromise. This is measured not by metrics (days to detect, MTTR) but by architectural invariants (what data is readable if this specific credential is compromised, what operations are possible if this specific domain is compromised).
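The "architectural invariant" framing above lends itself to direct computation: model credentials, systems, and data stores as a reachability graph and ask what one compromised credential can reach. The sketch below is a toy whose nodes and edges are invented to mirror the cascade pattern discussed earlier, not a real estate:

```python
# Sketch of an architectural-invariant check: the blast radius of a
# single compromised credential, computed as graph reachability.
from collections import deque

# Edge A -> B means: possession or compromise of A grants access to B.
edges = {
    "vendor-vpn-credential": ["vendor-remote-portal"],
    "vendor-remote-portal": ["prod-domain-admin"],
    "prod-domain-admin": ["customer-db", "backup-store"],  # flat domain
    "customer-db": [],
    "backup-store": [],
}

def blast_radius(start: str) -> set:
    # Breadth-first search over the access graph.
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        if node in seen:
            continue
        seen.add(node)
        queue.extend(edges.get(node, []))
    return seen - {start}

# Invariant to enforce: no single credential may reach the backup store.
# In the flat-domain graph above, the invariant is violated:
assert "backup-store" in blast_radius("vendor-vpn-credential")

# Cryptographic separation removes the edge, and the invariant holds:
edges["prod-domain-admin"] = ["customer-db"]
assert "backup-store" not in blast_radius("vendor-vpn-credential")
```

Unlike a risk score, this check has a binary answer that can be re-evaluated automatically every time an integration point or credential is added, which is what continuous adversarial drift requires in practice.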
These principles require deep technical engagement with vendors—not questionnaire assessment, but architectural review. They require customers to specify, not what the vendor should do, but what the vendor should be unable to do. They require procurement to shift from "what compliance certifications does this vendor have?" to "if we give you encrypted data, can you prove that you cannot read it?"
This is harder than the current TPRM model. It is also the only model that produces post-breach resistance rather than pre-breach optimism.
Operators managing mission-critical data infrastructure who recognise the architectural failure mode in their own vendor engagement processes are invited to request a technical briefing under mutual NDA.
Request a briefing under executed Mutual NDA.
PULSE engages only with verified counterparties. Strategic briefing material — reference architecture, regulatory mapping, deployment topology — is released after counter-execution of the NDA scoped to the recipient's evaluation purpose.
Request Briefing →