The laptop is not the perimeter—it is the evidence.

The cybersecurity industry treats physical security as a compliance checkbox: enforce laptop encryption (NIST SP 800-53 SC-28, FCA SYSC 13.1R), mandate smart-card logon, deploy Mobile Device Management (MDM) via Intune or Jamf, audit badge logs. These controls are correct but catastrophically incomplete. They assume the threat model is loss or theft. It is not. The threat model is adversary access while the operator remains unaware the device has been compromised, with the forensic record of that compromise arriving weeks or months after the breach has propagated through the organisation's data plane.

The Scattered Spider intrusions into M&S and other UK retail and logistics environments in 2025 leveraged physical reconnaissance, credential harvesting from discarded documents, and social engineering to obtain legitimate network access credentials: not phishing at scale, but low-friction human compromise in and around occupied offices. The Change Healthcare attack (UnitedHealth subsidiary, February 2024) exploited compromised credentials on a remote-access portal that lacked MFA. Neither attack required zero-day exploitation. Both succeeded because the organisation's operational security posture, meaning the practices and architectures governing how people, devices, and facilities interact with sensitive infrastructure, had not been integrated into the threat model or the design.

This is the blind spot at industrial scale: physical security is treated as a facility concern, digital security as an IT or engineering concern, and operational security (OPSEC) as a paper process nobody reads. The consequence is a critical structural failure that no EDR solution, SIEM rule, or MFA enforcement can repair.

The Compliance Narrative: Encryption, MDM, and the Illusion of Containment

For the last decade, the regulatory consensus has been architecturally sound in principle but incomplete in practice. The current NIST Cybersecurity Framework (CSF 2.0) places risk management strategy under the Govern function (GV.RM) and hardware and software inventories under the Identify function (ID.AM). ISO 27001:2022 Annex A.6 (People controls) and A.7 (Physical controls) prescribe access controls, background checks, and physical security. DORA (Regulation (EU) 2022/2554) demands operational resilience testing; NIS2 (Directive (EU) 2022/2555) mandates supply-chain risk management and incident notification within 72 hours. APRA CPS 234 (Australia) requires prudentially regulated entities to maintain information security controls commensurate with the criticality of their data. All of these are reasonable baseline expectations. None of them address the structural problem.

The standard remediation stack is well-defined: full-disk encryption (BitLocker, FileVault, LUKS), mandatory smart-card authentication, device compliance checks before network access, MDM policies enforcing screen-lock timeout (typically 15 minutes), and badge-access logging for physical facilities. When a device is lost—as happened frequently during the COVID-19 remote-work surge—the organisation can disable the device remotely and, in theory, prevent offline decryption. When a device is found by an adversary, the attacker faces a non-trivial cryptographic barrier.

But this narrative breaks at a critical juncture: encryption and remote wipe protect the device only when it is powered down, locked, or otherwise in a state where the adversary cannot interact with it. They offer nothing when the adversary can operate the device, or its credentials, while the operator is absent or unaware. The Snowflake tenant cascade of 2024 began with credential compromise, not device theft. The Latitude Financial breach (disclosed March 2023, affecting roughly 14 million customer records across Australia and New Zealand) began with compromised employee credentials used against third-party service providers; again, not physical device compromise. The underlying threat model in all these cases is: an adversary with legitimate or elevated credentials accessing the data plane without detection.

The laptop encryption and MDM controls are not wrong. They are insufficient. They optimise for the scenario where the adversary has the device and the operator does not. They do not optimise for the scenario where the adversary has the device, the credentials, and the network access—and the operator is still using it.

Why Physical OPSEC Remains Invisible to Detection-and-Response

The enterprise security operations centre (SOC) is optimised to detect anomalies in the data plane and control plane. EDR tools like CrowdStrike Falcon, Microsoft Defender for Endpoint, and SentinelOne log process execution, file writes, registry changes, and network connections. SIEM platforms (Splunk, Elastic Security, Microsoft Sentinel, Datadog) aggregate events from firewalls, proxies, DNS, and access logs. Threat intelligence teams maintain YARA rules, Sigma rules, and detections modelled on MITRE ATT&CK technique IDs. When an intrusion occurs, whether SolarWinds (disclosed December 2020, attributed to APT29/Cozy Bear), the MOVEit Transfer zero-day cascade (May 2023, attributed to Cl0p), or the Change Healthcare ransomware attack (February 2024), the post-incident review invariably finds logs that, retrospectively, pointed to the breach. But the logs only record what the device did, not who had access to it, whether the device was in an unexpected physical location, or whether the operator was asleep while the device was active.

This blindness is by design. The SOC has no real-time visibility into: whether a laptop is in the employee's home, the office, a café, or an airport lounge; whether the employee is physically present and conscious; whether a login occurred via a cached credential on a sleeping device; whether the screen is on or off; whether the keyboard and trackpad have been used; whether the device is connected to an untrusted network; or whether a second person is handling the device.

Standard endpoint detection tools collect these signals poorly or not at all. They assume the device is the operator. They do not model the device as a discrete asset with a location, an owner, and an intended use-case. The result is that an adversary with physical access and a valid credential can operate the device during low-activity periods (evenings, weekends, regional holidays) and exfiltrate data—and the EDR logs will show normal process execution, legitimate file access, and expected network connections. The attack will appear to be the operator's work.

The Scattered Spider case illustrated this perfectly: the operators gained access to legitimate employee credentials through social engineering and physical reconnaissance. Once they had the credentials, they logged in from their own infrastructure. The logs showed a human user accessing resources. Detecting that human user as an adversary would have required real-time verification that the logged-in user was actually the credential owner, or that the login occurred from an expected location and time. Neither check was present.

The Structural Failure: Operational Security Divorced from Architecture

The root cause is architectural, not procedural. Most organisations treat OPSEC as a set of behaviours (don't leave your laptop unattended, don't discuss sensitive work in public, don't take notes home) or a compliance matrix (background checks, office badge logs, clean-desk policies). OPSEC is not treated as a substrate for authentication and authorisation. There is no feedback loop between the physical and digital domains. A badge swipe provides no cryptographic material for data-plane access. A location signal does not constrain what resources a credential can unlock. A device-presence assertion does not inform threat scoring.

Under the PULSE doctrine, operational security is engineered into the architecture, not written on a poster.

This means:

Zero-knowledge substrate for device state. Rather than trusting the endpoint to report whether a user is present, whether the device is locked, or whether it is in a "known good" location, the organisation must assume the device cannot be trusted and design authentication and authorisation so that they do not require that trust. A credential unlock should depend on real-time multi-factor signals that the organisation controls: a geofence (inside the office or on a specific corporate VPN egress), a time-based policy (9am–6pm, Monday–Friday, regional timezone), a push notification to a secondary trusted device, and a hardware key attestation. If any of these signals is unavailable, the credential cannot be used. The data plane requires proof that the user is present and aware, not merely that the credential is valid.
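A minimal sketch of such a fail-closed gate follows, in Python. Every name and threshold here (the geofence radius, the freshness windows, the example site coordinates) is an illustrative assumption, not a reference implementation; the point is that the absence of any one signal denies the unlock.

```python
import math
import time
from dataclasses import dataclass

GEOFENCE_RADIUS_M = 150      # device must be within this radius of a known site
PUSH_ACK_TTL_S = 90          # push approval on the secondary device must be this fresh
ATTESTATION_TTL_S = 60       # hardware key attestation must be this fresh

@dataclass
class Signals:
    lat: float                      # last observed device latitude
    lon: float                      # last observed device longitude
    push_acked_at: float | None     # epoch seconds of secondary-device approval
    attested_at: float | None       # epoch seconds of hardware key attestation
    local_hour: int                 # operator's local hour, from a trusted time source
    weekday: int                    # 0 = Monday

def within_geofence(lat: float, lon: float, site=(51.5143, -0.0885)) -> bool:
    # Equirectangular approximation; adequate at a 150 m radius.
    dlat = math.radians(lat - site[0])
    dlon = math.radians(lon - site[1]) * math.cos(math.radians(site[0]))
    return 6_371_000 * math.hypot(dlat, dlon) <= GEOFENCE_RADIUS_M

def unlock_permitted(s: Signals, now: float | None = None) -> bool:
    now = now or time.time()
    def fresh(t, ttl):
        return t is not None and now - t <= ttl
    return (
        within_geofence(s.lat, s.lon)
        and 9 <= s.local_hour < 18 and s.weekday < 5   # 9am-6pm, Monday-Friday
        and fresh(s.push_acked_at, PUSH_ACK_TTL_S)     # operator acknowledged the push
        and fresh(s.attested_at, ATTESTATION_TTL_S)    # hardware key physically present
    )  # any absent or stale signal fails closed
```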

Continuous adversarial posture drift. An adversary who obtains a credential learns its behavioural patterns. They learn when the operator typically accesses data, from where, and with what frequency. They learn which resources are used regularly and which are rarely touched. Under a static access model, the adversary can mimic these patterns indefinitely. Under a drift model, the organisation continuously modifies the shape of access: resource locations change (data is served from different infrastructure), authentication factors rotate without warning, geofence boundaries shift, time-based policies adjust based on calendar and timezone drift. The adversary who has learned the old pattern must learn a new one every week. Detection becomes feasible because the adversary cannot maintain the learned pattern; they either access resources in an obviously different way or they do not access them at all.
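One way deterministic drift can be realised is sketched below, assuming a weekly epoch and a secret held only by the control plane; the endpoint names, factor list, and derivation scheme are all illustrative assumptions.

```python
import hashlib
import hmac
import time

DRIFT_SECRET = b"held-by-control-plane-only"   # illustrative; rotated out-of-band
ENDPOINTS = ["eu-a.data.internal", "eu-b.data.internal", "eu-c.data.internal"]
EXTRA_FACTORS = ["push", "hardware_key", "biometric"]

def epoch(now: float | None = None) -> int:
    return int((now or time.time()) // (7 * 24 * 3600))   # weekly drift window

def posture(now: float | None = None) -> dict:
    # Derive this epoch's access "shape" from a keyed hash of the epoch number.
    digest = hmac.new(DRIFT_SECRET, str(epoch(now)).encode(), hashlib.sha256).digest()
    return {
        "endpoint": ENDPOINTS[digest[0] % len(ENDPOINTS)],     # data served from here
        "extra_factor": EXTRA_FACTORS[digest[1] % len(EXTRA_FACTORS)],
        "geofence_radius_m": 100 + digest[2] % 100,            # 100-199 m this week
    }

# An adversary replaying last week's learned pattern now contacts the wrong
# endpoint with the wrong factor: a high-signal detection event.
```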

Physical presence as a cryptographic primitive. A badge reader, a proximity beacon (ultra-wideband or Bluetooth), or a biometric scanner should be integrated into the login ceremony—not as an additional step that can be bypassed or phished around, but as a component of the cryptographic computation itself. A login to the data plane should require proof that a hardware credential (the badge, the biometric matcher) was used within the last 60 seconds and within 10 metres of a known location. This is not user-experience theatre; it is genuine threat model hardening.
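A sketch of that verification is below, with a symmetric HMAC standing in for what would, in practice, be an asymmetric signature from the reader's secure element; the key, field names, and thresholds are assumptions drawn from the figures in the paragraph above.

```python
import hashlib
import hmac
import json
import math
import time

READER_KEY = b"provisioned-per-reader-in-secure-hardware"   # illustrative
MAX_AGE_S = 60          # presence proof must be under a minute old
MAX_DISTANCE_M = 10     # login must occur within 10 m of the reader

def verify_presence(assertion: bytes, signature: bytes,
                    login_lat: float, login_lon: float) -> bool:
    expected = hmac.new(READER_KEY, assertion, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        return False                              # not signed by a known reader
    claim = json.loads(assertion)                 # {"ts": ..., "lat": ..., "lon": ...}
    if time.time() - claim["ts"] > MAX_AGE_S:
        return False                              # presence proof has expired
    dlat = math.radians(login_lat - claim["lat"])
    dlon = math.radians(login_lon - claim["lon"]) * math.cos(math.radians(claim["lat"]))
    return 6_371_000 * math.hypot(dlat, dlon) <= MAX_DISTANCE_M
```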

Domain-specific automation at the data layer. Rather than relying on EDR, firewall rules, and SIEM alerts to detect anomalous data access, the organisation embeds access control at the storage layer. A file should not be readable by a process unless both the process identity and the real-time physical context (location, time, presence signal) satisfy the data-owner's policy. This means EDR cannot be circumvented by disabling it; access control is enforced by the data system itself.
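A sketch of enforcement at the read path itself, under assumed context and policy schemas; in a real deployment this logic would sit inside the storage service, beyond the reach of anything running on the endpoint.

```python
from dataclasses import dataclass

@dataclass
class Context:
    process_id: str        # attested workload identity, e.g. SPIFFE-style
    site: str              # physical location reported by the presence substrate
    presence_fresh: bool   # real-time operator-presence check passed

@dataclass
class Policy:
    allowed_processes: frozenset   # process identities the data owner permits
    allowed_sites: frozenset       # physical sites the data owner permits

def read(path: str, ctx: Context, policies: dict[str, Policy]) -> bytes:
    p = policies[path]
    if (ctx.process_id not in p.allowed_processes
            or ctx.site not in p.allowed_sites
            or not ctx.presence_fresh):
        raise PermissionError(f"{path}: policy denies this context")
    with open(path, "rb") as f:    # enforcement sits in front of every read
        return f.read()
```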

Practical Restructuring: From Compliance to Architecture

Operationalising this requires three interlocking shifts:

First, treat the office, or the authorised work location, as the trusted execution environment, not the laptop. The laptop is a terminal, not a safe. All sensitive computation, decryption, and data access happens within a controlled perimeter: a physical office with badge access, a VPN terminating in a data centre with strict geofence rules, or a hardware security module that requires physical presence. The laptop stores credentials in a sealed, unusable form; they are activated only when the operator is within the trusted perimeter and has passed real-time presence verification. Remote work remains possible, but under a different threat model. Contractors and temporary staff work from a specific office on a specific day; that policy is cryptographically bound to their access token.
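One possible shape for those sealed credentials, sketched with the Python cryptography package's Fernet recipe; the PerimeterKeyService class is a hypothetical stand-in for an HSM-backed service reachable only from inside the perimeter.

```python
from cryptography.fernet import Fernet

class PerimeterKeyService:
    """Stand-in for an HSM-backed service reachable only inside the perimeter."""
    def __init__(self):
        self._key = Fernet.generate_key()      # wrapping key never leaves the service

    def wrap(self, secret: bytes) -> bytes:
        return Fernet(self._key).encrypt(secret)

    def unwrap(self, blob: bytes, presence_verified: bool) -> bytes:
        if not presence_verified:              # e.g. the verify_presence() check above
            raise PermissionError("no live presence proof; credential stays sealed")
        return Fernet(self._key).decrypt(blob)

svc = PerimeterKeyService()
sealed = svc.wrap(b"api-token-or-private-key")   # this ciphertext is all the laptop holds
plain = svc.unwrap(sealed, presence_verified=True)
```

Outside the perimeter, the laptop carries only ciphertext; theft or covert use of the device yields nothing usable.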

Second, integrate physical signals into the identity and authorisation layer, not just the perimeter. Most organisations use badge logs for compliance audits, not for real-time access control. The badge swipe should be a cryptographic event: the badge reader (backed by an HSM or other trusted hardware) generates a signed assertion that a physical presence occurred at a specific time and location. That assertion becomes part of the login token. The data plane accepts the token only if the assertion is recent and the location matches the data's access policy. This requires coordination between facilities, identity systems (Okta, Entra, Active Directory), and data systems, but it is technically straightforward.
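A sketch of carrying that assertion inside the login token, with HMAC standing in for the asymmetric signatures an identity provider and badge reader would use in practice; the token layout and claim names are illustrative assumptions.

```python
import base64
import hashlib
import hmac
import json
import time

IDP_KEY = b"identity-provider-signing-key"   # illustrative; a real IdP signs with RSA/EC

def issue_token(subject: str, presence_assertion: dict, reader_sig: str) -> str:
    body = json.dumps({
        "sub": subject,
        "iat": int(time.time()),
        "presence": presence_assertion,   # {"ts", "site", ...} from the badge reader
        "presence_sig": reader_sig,       # the reader's own signature, carried intact
    }).encode()
    sig = hmac.new(IDP_KEY, body, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(body).decode() + "." + sig

def data_plane_accepts(token: str, required_site: str, max_age_s: int = 300) -> bool:
    body_b64, sig = token.rsplit(".", 1)
    body = base64.urlsafe_b64decode(body_b64)
    expected = hmac.new(IDP_KEY, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, sig):
        return False                      # token not issued by the IdP
    claims = json.loads(body)
    fresh = time.time() - claims["presence"]["ts"] <= max_age_s
    return fresh and claims["presence"]["site"] == required_site
```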

Third, design for non-repudiation and continuous audit. Every data access should be bound to a real-time physical presence verification. If an employee accesses customer data, that access must be cryptographically linked to badge, location, and biometric evidence. If the combination cannot be matched (for example, badge records place the employee in a different building at the moment of access), the system alerts. This is not surveillance for its own sake; it is a structural property of the architecture. An adversary who has compromised a credential cannot use it without also spoofing the physical signals, which are hardware-controlled and distributed.
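The audit join itself can be sketched simply, assuming access and presence event schemas along the lines below; an access with no matching presence record is the alert condition the paragraph describes.

```python
from dataclasses import dataclass

@dataclass
class Access:
    user: str
    ts: float
    site: str      # where the data policy says the user must be

@dataclass
class Presence:
    user: str
    ts: float
    site: str      # where hardware actually observed the user

def unmatched_accesses(accesses: list, presences: list, window_s: int = 120) -> list:
    index = [(p.user, p.site, p.ts) for p in presences]
    alerts = []
    for a in accesses:
        matched = any(u == a.user and s == a.site and abs(t - a.ts) <= window_s
                      for u, s, t in index)
        if not matched:
            alerts.append(a)   # credential used with no matching physical signal
    return alerts
```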

The Immediate Landscape: Regulatory and Operational Pressure

The regulators are beginning to sense this gap. The FCA's operational resilience rules (SYSC 15A) require firms to identify important business services and design for continuity; DORA's resilience testing requirements (Chapter IV) mandate scenario-based testing, including sophisticated cyberattacks and insider threats. The SEC's four-business-day incident disclosure rule (Form 8-K Item 1.05, effective December 2023) pressures organisations to detect breaches rapidly, but detection is only possible if the breach leaves a detectable signature. NYDFS Part 500 (23 NYCRR 500) requires MFA and encryption, but does not yet mandate real-time presence verification. NIS2 compliance (transposition deadline 17 October 2024) requires supply-chain risk assessment and incident response; physical OPSEC integration would significantly raise the bar for adversaries.

Organisations in critical infrastructure (banking, energy, healthcare, telecommunications) and those holding customer data at scale face mounting pressure to demonstrate that their architecture is post-breach resistant—that even if an adversary has obtained a credential, they cannot exfiltrate data without triggering cryptographic rejection or detection. The compliance checkbox approach—encryption, MDM, badge logs—is demonstrably insufficient. The architectural approach—zero-knowledge substrate, continuous drift, physical-signal integration, domain-specific automation—is not yet standard, which means it is an asymmetric advantage for those who implement it first.

Conclusion: The Operational Border

The office, the badge, and the laptop are not separate security domains. They are threads in a single operational fabric. An organisation that treats them separately—physical security as facility management, endpoint security as IT, data security as engineering—has already lost the game. An adversary needs to compromise only one domain; they can then pivot to the others.

The alternative is to design from the assumption that no single domain can be trusted, and that access to sensitive data must require simultaneous proof across multiple domains. This is not a cultural shift or a training programme. It is an architectural redesign.

Qualified operators managing security programmes for organisations holding or transferring regulated data should request a technical briefing under executed Mutual NDA.

Engagement

Request a briefing under executed Mutual NDA.

PULSE engages only with verified counterparties. Strategic briefing material — reference architecture, regulatory mapping, deployment topology — is released after counter-execution of the NDA scoped to the recipient's evaluation purpose.

Request Briefing →
