The Inside Job Nobody Planned
The insider threat industry has spent two decades hunting for the traitor in the room—the disgruntled engineer, the compromised administrator, the mercenary contractor—when the real architectural failure is that organisations have built systems where ordinary, conscientious people routinely move data in ways that expose it to theft, loss, or state-level compromise.
This distinction matters because it shifts where your engineering attention must land. If insider threat is primarily malice, you hire investigators, deploy User and Entity Behaviour Analytics (UEBA), restrict privileged access, and pray. If it is primarily accident—a developer committing AWS credentials to GitHub, a financial analyst exporting a spreadsheet to a personal OneDrive, a system administrator reusing a password across production and a phishing target—then you must architect systems where the substrate itself makes careless or coerced disclosure impossible, regardless of individual intention or judgment.
The industry narrative treats this as an enforcement problem. The evidence shows it is a design problem.
What the Industry Data Actually Says
The 2024 Verizon Data Breach Investigations Report analysed more than 10,000 confirmed data breaches and found that the human element featured in 68% of them—a proportion that has stayed stubbornly high for over a decade. Within that cohort, the distinction matters: malicious insiders accounted for roughly 8% of breaches, whilst accidental disclosure accounted for 25–30%. More telling still, the path to breach in most cases was opportunistic rather than planned: a backup left in an S3 bucket without authentication controls, credentials hard-coded in a public repository, a database snapshot accidentally made world-readable, a SaaS tenant configuration error that exposed another customer's data.
The Snowflake customer-tenant compromises of mid-2024 crystallised this pattern. Attackers exploited stolen credentials belonging to service accounts and individual users—obtained via infostealer malware and phishing—to access customer databases directly, in many cases because the affected accounts had no multi-factor authentication enforced. The structural failure was not the existence of privileged accounts (necessary) but rather that the system's architecture made it possible for a single compromised credential to grant direct, unmediated read access to petabytes of customer data. No SIEM log, no SOAR playbook, no EDR agent prevents that if the authentication token itself is valid. Snowflake subsequently introduced options for customers to enforce MFA and tighter session and network controls, but the underlying design assumption—that access control + password hygiene = protection—had already proven false at scale.
The M&S Scattered Spider attack of spring 2025 followed a similar arc: attackers obtained initial access through social engineering of employee credentials, then moved laterally without detection for weeks because the network architecture permitted it. Post-incident reporting indicated that even with EDR deployed across the estate, the attackers were able to pivot through legacy segmentation boundaries and exfiltrate data because the control-plane (identity and authorisation) remained separate from, and unenforced within, the data-plane itself. Logs recorded the movement—but logs are forensics, not barriers.
Regulatory pressure has begun to encode this insight. The SEC's cybersecurity disclosure rule (in effect since December 2023) requires material incidents to be disclosed within four business days of a materiality determination—a timeline so aggressive that organisations must assume they will be detected post-breach, not pre-breach. The EU's NIS2 Directive (member-state transposition due from October 2024) demands "appropriate and proportionate technical, operational and organisational measures," judged against the state of the art. Yet the state of the art, as practised by typical organisations, is by definition compromised routinely. NIS2 compliance cannot therefore be achieved through traditional controls; it requires architectural separation of privilege and data.
The Control Stack Has Reached Its Ceiling
Enterprise security spending on detection and response has reached $18 billion annually in the United States alone (Gartner, 2024). SIEM, SOAR, EDR, XDR, UEBA, DLP, API gateways, Cloud Access Security Brokers (CASB), inline proxies, and network segmentation are deployed in layer upon layer across mature organisations. The Qilin ransomware attack on Synnovis, the NHS pathology provider (June 2024), reportedly initiated via compromised credentials, succeeded despite the organisation having EDR, firewalls, and network intrusion detection systems in place. The attackers moved laterally through the environment for days before encryption began, during which alerts were reportedly generated—yet none triggered automated intervention.
This is not a tuning problem or a log ingestion problem. It is an architectural one.
The legacy stack assumes a perimeter-and-gate model: defend the boundary, detect anomalies within, and respond when threats cross a threshold. But that model breaks when:
- Credentials themselves become the perimeter. A valid authentication token, obtained via phishing, infostealer, or credential stuffing, is indistinguishable from a legitimate login to the access-control system.
- Data and computation are entangled. If a person with rights to read data also has rights to exfiltrate it (via SCP, API, cloud bucket, email, or USB), then protecting the account means limiting what the person can do—not what the data can be exposed to.
- Adversarial pressure is continuous. The attacker probes repeatedly, iterates their tools, and absorbs lessons from each failed intrusion. The defender tunes rules, adjusts thresholds, and grows the detection workload by 30–40% annually. This is an asymmetric race that the defender loses over time.
This is why modern ransomware operators (ALPHV/BlackCat, LockBit 3.0, Cl0p) now routinely operate for weeks or months inside compromised networks—not because they evade detection, but because detection without enforcement is forensic noise.
The PULSE Architectural Thesis
The corrective principle flows from a simple inversion: if accidental disclosure is the primary mode, then the architecture must be designed so that accident is materially impossible regardless of the intent or competence of the operator.
This requires three interlocked design changes:
Zero-knowledge substrate. The system must be architected so that no single actor—user, administrator, service account, or process—ever has unmediated access to data. Instead, data remains encrypted, fragmented, or segregated such that disclosure requires not merely one compromised credential or one lateral move, but the compromise of multiple independent systems simultaneously. This is the principle behind threshold cryptography, secret sharing schemes, and Byzantine-resistant consensus—not as a cryptographic curiosity but as an operational necessity.
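The secret-sharing idea above can be sketched in a few lines. This is an illustrative k-of-n Shamir construction over a prime field (the prime, parameters, and function names are this sketch's own, not any particular product's API), showing that no single share holder learns anything about the secret:

```python
# Minimal k-of-n Shamir secret sharing over a prime field.
# Illustrative only: no side-channel hardening, no serialisation format.
import random

PRIME = 2**127 - 1  # a Mersenne prime large enough for a 16-byte secret


def split(secret: int, k: int, n: int):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]

    def f(x):
        acc = 0
        for c in reversed(coeffs):  # Horner evaluation of the polynomial
            acc = (acc * x + c) % PRIME
        return acc

    return [(x, f(x)) for x in range(1, n + 1)]


def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i == j:
                continue
            num = (num * -xj) % PRIME
            den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret


shares = split(123456789, k=3, n=5)
assert reconstruct(shares[:3]) == 123456789  # any 3 shares suffice
assert reconstruct(shares[1:4]) == 123456789
```

Fewer than k shares yield a value statistically independent of the secret, which is the property that forces an attacker to compromise multiple independent custodians simultaneously.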
In practice: instead of a database holding plaintext customer records accessible to an authenticated API service, the substrate separates the query layer (which issues transformed, aggregated, or redacted results) from the storage layer (which never exposes the underlying data). The authentication token grants permission to query, not access to data. The query engine executes under a cryptographic proof of validity, visible to the access-control system, but not controlled by it.
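As a toy illustration of that separation (every name here is hypothetical), the gateway below exposes only pre-approved aggregate transformations; the capability token authorises asking a question, never reading the rows:

```python
# Sketch of a query layer that returns only pre-approved aggregates.
# Raw rows never cross the trust boundary; the token grants the right
# to *query*, not the right to *read*.

APPROVED = {
    "avg_balance": lambda rows: sum(r["balance"] for r in rows) / len(rows),
    "count": lambda rows: len(rows),
}


class QueryGateway:
    def __init__(self, rows):
        self._rows = rows  # storage layer: private to the gateway

    def query(self, token: str, name: str):
        if name not in APPROVED:
            raise PermissionError(f"query '{name}' is not pre-authorised")
        if not token.startswith("cap:"):  # stand-in for real token validation
            raise PermissionError("invalid capability token")
        return APPROVED[name](self._rows)


gw = QueryGateway([{"balance": 100}, {"balance": 300}])
assert gw.query("cap:trader-7", "avg_balance") == 200.0
```

Even a fully valid token cannot request "dump all rows", because that transformation simply does not exist in the approved set.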
Control-plane and data-plane fusion. Legacy architectures treat access control (who can do what) as a policy layer, separate from the data-plane (where actions execute). The Scattered Spider and Synnovis breaches succeeded because once an attacker obtained a valid authentication token, the data-plane had no mechanism to reject the subsequent action—exfiltration, lateral movement, encryption—even though the action violated the intent of the access policy.
Instead, the control-plane must be fused into the data-plane: every operation is pre-authorised by a distributed, tamper-evident control layer before execution is possible. A database query does not return data; it returns only the result of applying a pre-validated transformation. A file move does not transfer data; it transfers only a cryptographic proof-of-transfer, which the receiving system independently validates against its own control policy.
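A minimal model of that fusion, assuming for brevity a single shared authorisation key (a real deployment would use asymmetric or threshold signatures rather than a symmetric key both planes hold):

```python
# Sketch: the data-plane refuses any operation that does not carry a
# control-plane authorisation tag. Key handling and wire format are
# illustrative assumptions, not a real protocol.
import hashlib
import hmac
import json

CONTROL_KEY = b"control-plane-authorisation-key"  # simplification


def authorise(op: dict) -> bytes:
    """Control-plane: sign an operation it has validated against policy."""
    msg = json.dumps(op, sort_keys=True).encode()
    return hmac.new(CONTROL_KEY, msg, hashlib.sha256).digest()


def execute(op: dict, tag: bytes) -> str:
    """Data-plane: execute only if the tag proves prior authorisation."""
    msg = json.dumps(op, sort_keys=True).encode()
    expected = hmac.new(CONTROL_KEY, msg, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise PermissionError("operation not pre-authorised")
    return f"executed {op['action']} on {op['resource']}"


op = {"action": "read-aggregate", "resource": "fx_rates"}
tag = authorise(op)
assert execute(op, tag).startswith("executed")
```

The point of the sketch is the failure mode: altering the operation after authorisation (say, swapping "read-aggregate" for "export-raw") invalidates the tag, so the data-plane fails closed rather than trusting the session.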
Adaptive adversarial drift. The architecture must assume that adversarial posture—both external and insider—will improve continuously. Rather than tuning detection rules annually, the system's access patterns, encryption keys, and data-plane routing must shift on a schedule decoupled from human management cycles. This is not security by obscurity; it is security by continuous architectural instability that prevents an attacker from building a stable pivot point.
In practice: the cryptographic keys used to seal data-plane operations rotate independently of human key management. The routing topology of data between services shifts based on adversarial signals. The authentication trust chains incorporate continuous behaviour baselining, not as a UEBA overlay but as a native function of the control-plane itself.
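One way to decouple rotation from human management cycles is to derive the active key from a root secret and the current time epoch, so every service computes the same key without a rotation ceremony. A sketch, with illustrative epoch length and labels:

```python
# Sketch: per-epoch data-plane keys derived HKDF-style via HMAC from a
# root secret provisioned out of band. Epoch length is an assumption.
import hashlib
import hmac

ROOT = b"root-secret-provisioned-out-of-band"
EPOCH_SECONDS = 3600  # rotate hourly, on a clock, not on a ticket queue


def epoch_key(now: float, root: bytes = ROOT) -> bytes:
    """Derive the data-plane key for the epoch containing `now`."""
    epoch = int(now // EPOCH_SECONDS)
    label = b"data-plane|" + str(epoch).encode()
    return hmac.new(root, label, hashlib.sha256).digest()


k1 = epoch_key(0)
k2 = epoch_key(EPOCH_SECONDS)  # next epoch: a different key
assert k1 != k2
assert epoch_key(10) == epoch_key(3599)  # stable within an epoch
```

A stolen epoch key expires on its own schedule, which is the "continuous structural instability" point: the attacker's pivot material decays without any defender noticing or acting.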
Design Principles in Action
Consider a financial services scenario: a trader requires access to foreign exchange rates and client positions to execute a trade. In the traditional model, the trader's workstation is authenticated to a financial data system and granted read access to the relevant data, and preventing exfiltration is left to the trader's judgment and the DLP system's detection.
Under the PULSE model, the scenario changes. The trader's request triggers a query through a zero-knowledge interface that:
- Transforms the query into a cryptographic proof of intent, signed by the trader's device (under continuous behaviour validation).
- Submits the proof to a distributed control-plane, which validates that the proof matches pre-authorised access policies.
- Returns not the raw data, but a redacted, aggregated, or synthetic dataset that is sufficient for the task but insufficient for reconstruction of the underlying positions.
- Logs the query, the authorisation, and the result-set signature—not as a detective record, but as a cryptographic audit chain that proves integrity.
If the trader's account is compromised by phishing, or if the trader is coerced to export data, the system remains resistant because the underlying data is never available to the trader's session—only the transformed result is. If the trader attempts to copy the result-set and exfiltrate it, the data is either dynamically obfuscated (using differential privacy or data poisoning techniques) or watermarked with such fine-grained provenance that any reconstruction attempt is detectable and attributable.
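The audit-chain step in the scenario above can be sketched as a simple hash chain, where each record commits to its predecessor so any after-the-fact rewrite is detectable (field names here are illustrative):

```python
# Sketch of a tamper-evident audit chain: each entry's hash covers both
# the entry and the previous entry's hash, so edits break the chain.
import hashlib
import json

GENESIS = "0" * 64


def append(chain: list, entry: dict) -> None:
    """Append an audit record linked to the previous record's hash."""
    prev = chain[-1]["hash"] if chain else GENESIS
    body = json.dumps({"prev": prev, **entry}, sort_keys=True)
    chain.append({
        "prev": prev,
        "entry": entry,
        "hash": hashlib.sha256(body.encode()).hexdigest(),
    })


def verify(chain: list) -> bool:
    """Recompute every link; any mismatch means tampering."""
    prev = GENESIS
    for rec in chain:
        body = json.dumps({"prev": prev, **rec["entry"]}, sort_keys=True)
        digest = hashlib.sha256(body.encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != digest:
            return False
        prev = rec["hash"]
    return True


log = []
append(log, {"query": "fx_rates", "trader": "t-7", "result_sig": "ab12"})
append(log, {"query": "positions", "trader": "t-7", "result_sig": "cd34"})
assert verify(log)
log[0]["entry"]["trader"] = "t-9"  # tamper with an old record
assert not verify(log)
```

A production chain would additionally sign or anchor the head hash externally; the sketch shows only the integrity property the scenario relies on.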
This is not "zero trust"—that term has become marketing cargo-cult, referring merely to continuous authentication. This is zero-exposure: the substrate is architected so that exposure requires the compromise of multiple independent systems, each of which must be independently architected to prevent that compromise.
The Regulator's Perspective
NYDFS Part 500, the FCA's Senior Managers & Certification Regime (SM&CR), APRA's CPS 234 (Information Security), and the EU's Digital Operational Resilience Act (DORA) all converge on a single demand: organisations must demonstrate that their control environment is inherent to the technology, not a policy overlay that depends on human compliance.
When regulators pursue an organisation over data security failings (as the SEC did in charging SolarWinds and its CISO in October 2023, and as the scrutiny that followed the Change Healthcare attack of February 2024 showed), the focus is on whether the breach was preventable through architecture. The SEC's SolarWinds complaint cited weak password practices on externally facing systems; Change Healthcare's initial access reportedly came through a remote-access portal without multi-factor authentication—failures that no SIEM, no EDR, no incident response team could have overcome. The charge is not failing to detect; it is failing to design.
This regulatory reading is correct. An organisation's security posture is only as strong as its most critical architectural failure. And most critical failures in insider-threat scenarios are structural, not tactical.
Conclusion: The Design Frontier
The insider threat problem, as conventionally framed, cannot be solved by traditional controls because it is not fundamentally a threat problem—it is a design problem. The accidental disclosure, the credential compromise, the lateral movement, the exfiltration: all of these are natural consequences of an architecture that concentrates access and data in the same systems, then relies on enforcement (UEBA, DLP, EDR, logs) to prevent misuse.
That model is exhausted. The frontier—the only defensible frontier—is systems designed so that accidental disclosure is architecturally impossible because the data-plane itself is fused with the control-plane, secrets are distributed across independent systems, and adversarial pressure is met with continuous structural instability.
This requires engineering rigour, cryptographic primitives, and willingness to question inherited infrastructure. It is not faster than layering another SIEM. But it is the only model that scales to the security demands of organisations operating under regulatory pressure and continuous adversarial pressure simultaneously.
Qualified operators interested in exploring how zero-knowledge substrate and control-plane fusion apply to specific domain-critical infrastructure are invited to request a technical briefing under executed Mutual NDA.
PULSE engages only with verified counterparties. Strategic briefing material — reference architecture, regulatory mapping, deployment topology — is released after counter-execution of the NDA scoped to the recipient's evaluation purpose.
Request Briefing →