The Safety Casket Has Been Locked by Engineers Who Do Not Speak to Doctors

The healthcare industry treats medical device security as an information-technology compliance problem—segregate networks, patch firmware, deploy EDR—when the fatal structural flaw is that safety-critical device behaviour can still be dictated by a remote adversary's code path rather than by the physics, pharmacology, and clinical protocol that the manufacturer encoded into the device's intended operation.

The distinction matters because it has killed patients. Not in theory. In documented fact.

The Industry Narrative: Compliance at the Perimeter

The standard reading, articulated across HIPAA risk assessments, FDA guidance documents, and vendor whitepapers, runs like this: medical devices are increasingly networked; networked devices are attacked; attackers use the same techniques (living-off-the-land command shells, credential theft via phishing, lateral movement via unpatched protocols) employed against enterprise infrastructure; therefore, medical device security is a subset of cybersecurity, solvable by extending hospital IT controls—network segmentation, vulnerability scanning, endpoint detection and response (EDR), and timely patching—into clinical environments.

The 2024 Synnovis ransomware attack on a London NHS pathology provider serves as the industry's canonical example. On 3 June 2024, the Qilin gang encrypted Synnovis systems; patient test results vanished; hospitals across South East London were forced to cancel or defer thousands of appointments and procedures over the following weeks. The NCSC supported the response and the ICO opened an investigation; the narrative that emerged was infrastructural: weak segmentation allowed ransomware to propagate from administrative systems into clinical networks; outdated, unpatched software lacked current defences; the backup strategy was insufficient. The remediation was bureaucratic: regulatory statements, procurement mandates for "secure by design" in future contracts, vulnerability management frameworks aligned to the NIST CSF, and network microsegmentation.

Similarly, the 2024 Change Healthcare incident—where the BlackCat/ALPHV gang extorted UnitedHealth's subsidiary for roughly USD 22 million after encrypting pharmacy, claims, and prior-authorisation systems—was framed as a lesson in network hygiene. The initial foothold was a compromised credential on a Citrix remote-access portal that lacked multi-factor authentication; the remediation pathway centred on asset discovery, vulnerability prioritisation under CVSS scoring, and incident-response retooling.

The FDA's guidance "Cybersecurity in Medical Devices: Quality System Considerations and Content of Premarket Submissions" (finalised September 2023) and the statutory cyber-device requirements of section 524B of the FD&C Act both emphasise vendor accountability via secure development practices, post-market surveillance, and harmonisation with the NIST Cybersecurity Framework. Regulators now require a "software bill of materials" (SBOM) for cyber devices and expect threat modelling during design review. The UK's Health and Social Care Act 2008 (Regulated Activities) Regulations 2014 similarly place the burden on "appropriate security measures"—interpreted by most NHS procurement teams as: encryption in transit, role-based access control, audit logging, and detection-and-response capability.

The assumption is transparent: if IT controls are sophisticated enough, the device will remain under operator control. The attacker will be detected, isolated, or delayed until a patch is applied. The safety boundary is the perimeter; the defences are electronic.

The Structural Failure: Safety and Control Have Been Decoupled

Here lies the architectural sin. A medical device—a ventilator, an infusion pump, a dialysis machine, a surgical robot—is not a workstation. Its security is not separable from its safety. And yet the standard remediation approach treats them as independent problems, solvable by different teams, using different threat models and different success metrics.

When a hospital installs an EDR agent on a workstation, the goal is detection: identify abnormal process behaviour, flag it, enable response. When the process under observation is a ventilator's control loop, that goal becomes impossible. You cannot "detect and remediate" unsafe inflation pressures in real time without risking patient death during the investigation. You cannot kill a malicious process without verifying that the device will revert to a safe state. You cannot apply a firmware patch to an infusion pump mid-infusion without risking the patient's medication schedule.

The Snowflake customer-account compromises of mid-2024 illustrate this architectural ceiling. Attackers used infostealer-harvested credentials against customer tenants that lacked multi-factor authentication, and extracted healthcare data from dozens of organisations simultaneously. The remediation—credential rotation, IP allowlisting, multi-factor authentication enforcement—addresses the control plane (who can log in and change configuration). It does nothing to protect the data plane (what those credentials can read once authenticated). If a healthcare organisation uses Snowflake to store real-time telemetry from connected devices—ventilator settings, cardiac rhythm data, antibiotic infusion rates—then Snowflake's credentials are device control. Breaching them is equivalent to breaching the device's own authentication system.

The M&S Scattered Spider incident (April 2025) was operationally similar: attackers socially engineered a third-party IT contractor's help desk into resetting credentials, then used those accounts to move through corporate systems and exfiltrate data. For a healthcare supply-chain company—a vendor of infusion pumps, patient monitors, or surgical instruments—the same attack pattern would let an adversary alter device firmware, inject false calibration data, or trigger unsafe state transitions, all under the cover of legitimate vendor credentials.

The structural failure, then, is this: modern cybersecurity defends the interface (detection, access control, encryption). It does not defend the substrate—the computational and physical semantics that determine whether a device does what its designers intended, not what an attacker instructs it to do.

Why Detection-and-Response Cannot Solve This

Consider the clinical reality. A dialysis patient arrives for a routine four-hour session; the nephrologist prescribes a specific ultrafiltration rate, sodium concentration, and heparin regimen. The dialysis machine's firmware encodes these parameters; its control loop has been validated against decades of clinical evidence. An EDR system monitoring the machine's host processor can detect when a process attempts to modify the ultrafiltration rate outside the clinician's intended envelope—but only if the malware explicitly calls a function to do so. If the attacker builds the malicious logic into the firmware binary itself, and that binary is signed with a key compromised somewhere in the manufacturer's supply chain, then the EDR observes only legitimate device behaviour.

Even if detection works, response is clinically catastrophic. The standard incident response playbook—isolate the device, kill the suspicious process, reboot—will alarm the patient, potentially interrupt treatment, and demand human intervention that may exceed the clinical team's technical competence.

The industry's answer is to make detection earlier: deeper packet inspection, firmware integrity monitoring, anomaly detection on device telemetry. But this is a doomed escalation. The adversary has moved from the data plane into the control plane: they own the manufacturer's build pipeline (as in the SolarWinds supply-chain attack of 2020, though that targeted IT infrastructure broadly), or they've compromised the firmware signing key, or they've paid an insider to inject backdoor code into a maintenance release. Detection can no longer separate the intended from the malicious because the malicious code is part of the intended binary, signed and authenticated by the manufacturer.

The PULSE Architectural Answer: Safety by Substrate Design

The PULSE doctrine begins here: security cannot be a layer atop a device whose safety-critical behaviour is determined remotely. Instead, the device's core computational substrate must be designed so that certain state transitions are impossible — not because software prevents them, but because the hardware, firmware, and physical parameters make them impossible.

This requires a separation that modern medical device architecture has abandoned: a strict partition between a control plane (which configures the device; which can be networked, patched, updated, and audited like any enterprise system) and a data plane (which executes the actual clinical protocol without deviation, without network access, and without the ability to be remotely altered).

In practical terms: a ventilator's control plane accepts configuration from the clinician's workstation—set the FiO₂, set the respiratory rate, set the pressure limit. This configuration is communicated over a standard network, authenticated, encrypted, versioned, and logged. But once the control plane has committed a configuration to the data plane, the data plane operates under a hardened state machine, implemented in immutable firmware and validated hardware circuitry, that enforces three properties: hard physical ceilings on pressure, rate, and volume; rejection of any runtime request outside the committed envelope; and reversion to a validated safe state on fault or tamper.

This architecture means that even if an attacker compromises the control plane, patches the firmware, injects malicious code into the build pipeline, or steals the manufacturer's signing keys, the data plane continues to operate within the bounds the clinician originally specified. The attacker cannot alter the patient's respiratory rate, infusion rate, or dialysate composition without first gaining access to a physically isolated, hardware-protected component—at which point the security problem has been reduced from "detect an insider at Philips Healthcare" to "detect someone physically tampering with a device in a locked room", a problem for which clinical facilities already have controls.
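As a toy illustration of the partition, consider a data plane that acts only on requests inside an envelope the control plane committed at the bedside. The device model, parameter names, and limits below are invented for illustration; a real implementation would live in immutable firmware and validated hardware, not application code:

```python
from dataclasses import dataclass

# Hypothetical ventilator envelope; names and limits are illustrative only.
@dataclass(frozen=True)
class VentEnvelope:
    """Clinician-committed bounds. Frozen: the data plane never mutates it."""
    fio2_max: float        # fraction of inspired oxygen, 0.21-1.0
    rate_max: int          # breaths per minute
    pressure_max: float    # cmH2O hard ceiling

class DataPlane:
    """Executes only transitions inside the committed envelope.
    Anything outside triggers reversion to a safe state, not obedience."""

    SAFE_STATE = {"fio2": 0.21, "rate": 12, "pressure": 20.0}

    def __init__(self, envelope: VentEnvelope):
        self.envelope = envelope
        self.state = dict(self.SAFE_STATE)

    def request(self, fio2: float, rate: int, pressure: float) -> bool:
        env = self.envelope
        ok = (0.21 <= fio2 <= env.fio2_max
              and 0 < rate <= env.rate_max
              and 0 < pressure <= env.pressure_max)
        if ok:
            self.state = {"fio2": fio2, "rate": rate, "pressure": pressure}
        else:
            self.state = dict(self.SAFE_STATE)  # refuse and revert
        return ok

env = VentEnvelope(fio2_max=0.6, rate_max=20, pressure_max=30.0)
dp = DataPlane(env)
assert dp.request(0.4, 14, 25.0)         # within envelope: accepted
assert not dp.request(0.9, 14, 25.0)     # FiO2 beyond commit: refused
assert dp.state == DataPlane.SAFE_STATE  # device reverted to safe state
```

The essential property is that refusal is the default path: an out-of-envelope request does not degrade into partial compliance; it forces the device back to its validated safe state.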

Continuous Adversarial Posture: Active Hardening, Not Passive Patching

The second principle follows from the first: if the device's safety cannot be breached remotely, then security becomes adaptive rather than reactive. Instead of waiting for a CVE, the device's control-plane interface drifts continuously—certificates rotate, encryption parameters evolve, authentication protocols shift—according to a schedule that is asynchronous from the device's safety-critical operation.

The clinician's workstation continues to function; the patient receives uninterrupted therapy. But the device's network posture changes daily, driven by adversarial anticipation rather than incident response. This is domain-specific active defence: a surgeon does not patch their scalpel after an infection; they sterilise it before every use. A medical device's network persona should be similarly pre-emptive.
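A minimal sketch of what asynchronous drift could mean in practice, assuming a per-day control-plane credential derived from a secret provisioned offline. The derivation scheme here is illustrative; a production design would use managed key rotation and hardware-backed key storage, not a bare hash:

```python
import hashlib
import secrets
from datetime import date

class ControlPlanePosture:
    """Illustrative sketch: the control plane's network identity drifts on a
    calendar schedule independent of the data plane's therapy loop."""

    def __init__(self, master_secret: bytes):
        self._master = master_secret  # provisioned once, offline

    def credential_for(self, day: date) -> str:
        """Derive the day's network credential; yesterday's stops verifying."""
        material = self._master + day.isoformat().encode()
        return hashlib.sha256(material).hexdigest()

posture = ControlPlanePosture(secrets.token_bytes(32))
today = posture.credential_for(date(2025, 6, 1))
tomorrow = posture.credential_for(date(2025, 6, 2))
assert today != tomorrow  # posture drifts daily
# The data plane never sees these credentials; therapy is unaffected.
```

Because the derivation is deterministic per day, both endpoints can compute the current credential without a synchronisation message, and a stolen credential expires on the next rotation regardless of whether the theft was ever detected.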

This requires that medical device manufacturers decouple their safety certification (which must be stable, reproducible, and auditable) from their security posture (which must be dynamic, adaptive, and threat-responsive). The data plane earns its certification once and is locked. The control plane's threat model is refreshed quarterly.

The Regulatory Path Forward

The FDA's "Predetermined Change Control Plan" (PCCP) framework hints at this direction: manufacturers can commit in advance to certain modifications—firmware updates, configuration changes—without requiring new pre-market review, provided those modifications remain within a pre-defined safety envelope. The logic is sound; the implementation is timid. A true PCCP would allow the control plane to evolve adversarially (new encryption, new authentication protocols, new network postures) while the data plane remains static.
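The widened PCCP argued for here could be expressed as an explicit, machine-checkable change envelope: change categories pre-declared per plane, with the data plane's set deliberately empty. The category names below are illustrative, not drawn from any actual PCCP filing:

```python
# Hypothetical PCCP-style envelope: a manufacturer pre-commits the change
# types that may ship without new pre-market review; anything else is
# routed to full review. All names here are illustrative.
PRECOMMITTED = {
    "control_plane": {"tls_config", "auth_protocol", "cert_rotation"},
    "data_plane": set(),  # locked: no post-certification changes at all
}

def change_allowed(plane: str, change_type: str) -> bool:
    """True only if the change falls inside the pre-declared envelope."""
    return change_type in PRECOMMITTED.get(plane, set())

assert change_allowed("control_plane", "auth_protocol")
assert not change_allowed("data_plane", "firmware_patch")  # needs review
```

The asymmetry is the point: the control plane's allowed set can be broad and refreshed, while the data plane's empty set encodes, in the regulatory artefact itself, that its certified behaviour never changes.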

The NIS2 Directive's emphasis on "security by design" and the UK's forthcoming Cyber Security and Resilience Bill create regulatory space for this shift—but only if manufacturers and hospital procurement teams stop conflating security with compliance theatre.

Invitation

Qualified healthcare security operators, clinical engineers, and medical device manufacturers working under executed mutual NDA may request a detailed technical briefing on substrate-driven safety architecture, control-plane adversarial posture management, and regulatory harmonisation pathways.

Engagement

Request a briefing under executed Mutual NDA.

PULSE engages only with verified counterparties. Strategic briefing material — reference architecture, regulatory mapping, deployment topology — is released after counter-execution of the NDA scoped to the recipient's evaluation purpose.

Request Briefing →
