The title "Security Engineer" now means nothing because organisations have repurposed the function to mean "person who operates detection tools," when the discipline that matters is the one that builds systems that cannot be breached.
The Narrative: Where Detection-and-Response Captured Engineering
In mid-2024, Snowflake disclosed a cascade of customer-tenant compromises affecting multiple Fortune 500 organisations, a campaign that exposed not the platform's architecture but the security posture of customers whose credentials had been harvested by infostealer malware and reused against accounts that lacked multi-factor authentication. The incident was a lesson in the limits of operational detection: suspicious API calls generated alerts, anomalous logins were flagged, forensic timelines were assembled, all the machinery of modern security operations working at design capacity. Yet the campaign reached at least 165 customers before the full scope was understood.
Parallel to that incident, the 2024 State of Cyber Hygiene report from the SANS Institute and CrowdStrike found that 57% of organisations with formal security teams still could not account for all Internet-facing assets, and 71% lacked visibility into cloud-native workloads. This opacity persists not because organisations lack SIEM vendors (they do not), but because the security operations function (the discipline of detecting, responding to, and correlating events) has expanded to absorb the title and budget allocation previously held by systems engineering. The result is a category error: organisations hire a "Security Engineer" to write YARA rules and tune Sigma detections for their SOAR playbooks, when they have actually hired a senior security operations analyst.
The confusion is structural. When the Change Healthcare ransomware attack unfolded in February 2024 (later attributed to BlackCat/ALPHV), the response narrative centred entirely on operations: how quickly was the intrusion detected on EDR agents? How was the compromised VPN session correlated with lateral-movement telemetry in the SIEM? How did the SOC team orchestrate recovery via Splunk playbooks? Those questions are legitimate. But they obscure the prior failure: the infrastructure itself had been architected such that a single compromised edge credential could traverse the entire system. The engineering choices (the segmentation strategy, the cryptographic boundary design, the decision not to assume initial compromise) had already been made, and remediation could not fix them retroactively. The only remaining option was a weeks-long shutdown and forensic rebuild.
The industry narrative, reinforced by vendors such as CrowdStrike, Splunk, and Mandiant, treats this operational response as engineering success. Marketing claims and case studies celebrate the speed of detection: "Our platform detected the initial intrusion within 4 hours of first command execution." Yet in almost every documented case, including the MOVEit zero-day cascade in 2023 that affected over 2,500 organisations, the operational machinery functioned correctly. The breach succeeded not because detection failed, but because the system was architected to permit unimpeded exploitation once an attacker held a valid credential or a working exploit for an unpatched vulnerability.
This category error has become economically embedded. Job listings now describe "Security Engineer" roles that require expertise in Splunk administration, Jira ticket triage, and incident response runbook authorship. Organisations have structured their security budgets to fund operating expenses (staffed SOCs, tool subscriptions, MSSP contracts) rather than capital expenditure on systems re-architecture. The result is a workforce where the term "security engineer" describes a person working entirely in the detection-and-response domain, whilst the actual discipline of security engineering—the design of systems that resist breach through architectural principle—has been largely abandoned or outsourced to consultants hired only after an incident occurs.
The Structural Failure: Why You Cannot Detect Your Way Out of Breach
The distinction between security engineering and security operations is not semantic; it is architectural, and the conflation has reached the point where it actively undermines breach resistance.
Security operations is the discipline of reducing dwell time: the elapsed interval between initial compromise and discovery by human responders. It comprises detection (SIEM ingestion, rule tuning, anomaly modelling), triage (alert correlation, false positive suppression, enrichment via threat intelligence), response (containment, forensic preservation, eradication), and recovery. A mature SOC—staffed, tooled, and operationally sound—can reduce mean time to detection (MTTD) from weeks to hours. This is valuable work, and it has legitimately saved organisations from catastrophic data exfiltration by catching sophisticated campaigns before lateral movement matured. The issue is not that SOC work is unimportant; the issue is that SOC work is not engineering.
Security engineering is the discipline of designing systems such that breach does not propagate. It includes threat modelling (STRIDE or similar), cryptographic architecture design (zero-knowledge proofs, end-to-end encryption, homomorphic computation), data-plane isolation (network segmentation, kernel-level compartmentalisation, process capability restriction), control-plane separation (the administrative infrastructure that manages the data plane must itself be architecturally distinct and far less accessible), supply-chain risk management, and continuous adversarial drift (the deliberate, frequent mutation of security posture to render attacker reconnaissance obsolete).
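Threat modelling only functions as engineering when it produces a reviewable, machine-checkable artefact rather than a workshop slide. Below is a minimal sketch in Python of that idea: STRIDE coverage expressed as data that a CI gate can assert over. The component names, boundaries, and controls are illustrative assumptions, not a prescribed schema.

```python
# Minimal sketch: STRIDE coverage as a machine-checkable engineering
# artefact. Component names and mitigations are illustrative.
from dataclasses import dataclass

STRIDE = ("Spoofing", "Tampering", "Repudiation",
          "Information disclosure", "Denial of service",
          "Elevation of privilege")

@dataclass
class Component:
    name: str
    trust_boundary: str          # which boundary the component sits behind
    mitigations: dict[str, str]  # STRIDE category -> architectural control

api_gateway = Component(
    name="api-gateway",
    trust_boundary="internet-facing",
    mitigations={
        "Spoofing": "mutual TLS with short-lived client certificates",
        "Tampering": "signed request envelopes",
        "Information disclosure": "no plaintext payload logging",
    },
)

def unmitigated(component: Component) -> list[str]:
    """Return STRIDE categories with no recorded architectural control."""
    return [t for t in STRIDE if t not in component.mitigations]

# A CI gate can fail the build when an internet-facing component still
# has unmitigated categories.
print(unmitigated(api_gateway))
# ['Repudiation', 'Denial of service', 'Elevation of privilege']
```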
The conflation of these two disciplines creates a perverse incentive structure. When an organisation invests in a SIEM tool and hires security operations staff, leadership believes it has hired "security engineers" and expects breach to be prevented. When that SIEM detects an intrusion (as it must, because intrusions occur), the organisation celebrates the detection as evidence that engineering is working. This is backwards. Detection is not prevention—it is a consolation prize.
Consider the Synnovis/NHS ransomware attack in June 2024, where the Qilin group paralysed pathology services across London's hospitals. The attack leveraged credential theft and lateral movement within a relatively flat network. Forensic reviews identified that alerts were generated in monitoring systems before the attack achieved operational impact, yet remained unreviewed for days because the SOC lacked capacity. This is a failure of security operations (staffing, alert tuning, triage discipline). But it would not have occurred if the infrastructure itself had been engineered with zero-trust principles: if every data access required cryptographic proof of authorisation, if network segmentation prevented the attacker from reaching critical systems despite holding a valid employee credential, if administrative functions were isolated from production systems by architectural principle rather than policy.
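What "cryptographic proof of authorisation on every data access" can look like in practice: a minimal sketch using the widely available `cryptography` package, in which every request carries a short-lived capability signed by an authorisation service and verified at the resource itself, so a stolen session credential alone grants nothing beyond its scope. Service and resource names are illustrative assumptions.

```python
# Minimal sketch, assuming the `cryptography` package: short-lived signed
# capabilities checked at every resource, not once at the perimeter.
import json, time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

authz_key = Ed25519PrivateKey.generate()   # held only by the authorisation service
authz_pub = authz_key.public_key()         # distributed to every enforcement point

def issue_capability(subject: str, resource: str, ttl_s: int = 60) -> tuple[bytes, bytes]:
    """Short-lived, signed statement that `subject` may access `resource`."""
    claim = json.dumps({"sub": subject, "res": resource,
                        "exp": time.time() + ttl_s}).encode()
    return claim, authz_key.sign(claim)

def enforce(claim: bytes, sig: bytes, resource: str) -> bool:
    """Checked on every access, not once at login."""
    try:
        authz_pub.verify(sig, claim)
    except InvalidSignature:
        return False
    c = json.loads(claim)
    return c["res"] == resource and c["exp"] > time.time()

claim, sig = issue_capability("pathology-analyser-7", "results-db")
assert enforce(claim, sig, "results-db")
assert not enforce(claim, sig, "payroll-db")   # valid credential, wrong resource
```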
The hardest truth is this: an organisation with excellent security operations (fast detection, skilled response, mature SOAR) can still suffer a catastrophic breach if its systems were not engineered for breach resistance. The Optus 2022 breach exposed 9.8 million customers via a single unauthenticated API endpoint that had been left online long after it should have been decommissioned: a failure of asset management and authentication design, not detection. The Medibank 2022 breach similarly exploited architectural oversights (stolen credentials reused against remote access that did not enforce multi-factor authentication, granting broad reach once inside a flat network). In both cases, the operational response was executed professionally. The breach was not prevented because the engineering had not been done.
The PULSE Reading: Zero-Knowledge Substrate, Not Faster Alerts
The PULSE doctrine asserts that organisations should architect systems such that breach does not compromise data even if the perimeter is crossed. This requires three architectural shifts that conventional security operations cannot provide.
First: zero-knowledge substrate. The system should be designed such that the defending organisation cannot itself access the data it holds, transfers, or processes; only the legitimate authorised party can. This is not encryption-at-rest (a compliance theatre ritual). It is cryptographic architecture embedded at the data layer, such that keys are held in MPC (multi-party computation) networks split across jurisdictions, or such that homomorphic encryption allows computation on data that the platform operator cannot decrypt. When the Snowflake customers' data was exposed, it was exposed not because anyone's SOC was asleep, but because a single stolen credential was sufficient to read everything a tenant held. A zero-knowledge substrate would have meant that credential theft could not yield readable data, because no sensitive plaintext, and no key capable of producing it, ever reached the platform in a form the operator could read, log, or exfiltrate.
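A minimal sketch of the substrate idea, assuming the `cryptography` package: the data key is generated client-side, used once, and persists only as n-of-n XOR shares held by independent parties. The XOR sharing is a deliberate simplification standing in for a real MPC network, and the payload is illustrative; the platform stores only ciphertext it cannot decrypt.

```python
# Minimal sketch: client-side encryption with an additively split key.
# The platform never holds the assembled key or any plaintext.
import os
from functools import reduce
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key: bytes, n: int = 3) -> list[bytes]:
    """n-of-n additive sharing: reconstruction requires every share."""
    shares = [os.urandom(len(key)) for _ in range(n - 1)]
    return shares + [reduce(xor, shares, key)]

def join_key(shares: list[bytes]) -> bytes:
    return reduce(xor, shares)

# Client side: encrypt before anything leaves the client's boundary.
key = AESGCM.generate_key(bit_length=256)
shares = split_key(key)                      # one share per jurisdiction
nonce = os.urandom(12)
ciphertext = AESGCM(key).encrypt(nonce, b"patient record 4411", None)
del key                                      # the assembled key never persists

# The platform stores (nonce, ciphertext) only. Decryption requires all
# share-holders to cooperate in reassembling the key.
assert AESGCM(join_key(shares)).decrypt(nonce, ciphertext, None) == b"patient record 4411"
```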
Second: data-plane and control-plane separation. The infrastructure that delivers the service to users (the data plane) must be architecturally segregated from the infrastructure that manages, configures, and administers that service (the control plane). Most organisations run these as a single blast radius—a breach of administrative credentials gives an attacker access to all user data immediately. PULSE architecture separates these domains entirely: the data plane runs with minimal administrative footprint (essentially read-only), and any administrative change undergoes cryptographic consensus and audit logging in a control plane that is physically and logically isolated. This is not a SOC function—it is a systems design function.
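A minimal sketch of what cryptographic consensus on an administrative change can look like, again assuming the `cryptography` package. The 3-of-5 quorum, the key distribution, and the change payload are illustrative assumptions; the point is that no single administrative credential can act alone, and every decision lands in an audit log.

```python
# Minimal sketch: an administrative change applies only after a quorum of
# independently held control-plane keys has signed it.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

admins = [Ed25519PrivateKey.generate() for _ in range(5)]  # 5 key-holders
admin_pubs = [k.public_key() for k in admins]
QUORUM = 3
audit_log: list[tuple[bytes, int]] = []      # (change, valid signature count)

def apply_change(change: bytes, signatures: list[bytes]) -> bool:
    valid = 0
    for pub in admin_pubs:                   # each key counts at most once
        for sig in signatures:
            try:
                pub.verify(sig, change)
                valid += 1
                break
            except InvalidSignature:
                continue
    audit_log.append((change, valid))        # every attempt is recorded
    return valid >= QUORUM

change = b"rotate data-plane TLS roots"
sigs = [admins[i].sign(change) for i in (0, 2, 4)]
assert apply_change(change, sigs)            # 3-of-5 quorum met
assert not apply_change(change, sigs[:1])    # a lone administrator cannot act
```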
Third: continuous adversarial drift. The security posture of the system should not remain constant. Rather, the system should continuously mutate its defensive characteristics—the network topology, the cryptographic implementations, the authorisation schemes, the logging and audit mechanisms—in response to adversarial reconnaissance. The moment an attacker gains visibility into the system's defensive architecture, that architecture becomes stale. Adaptive active defence means the system re-architects faster than the attacker can act. Again, this is not a SIEM function. This is continuous engineering.
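One narrow, concrete instance of drift, as a sketch: a defensive parameter (here a service's listening port, an illustrative choice) is re-derived every epoch from a rotating secret, so topology an attacker mapped last hour is wrong this hour. The ROTATION_SECRET value, the epoch length, and the port range are all assumptions; in practice the secret would come from a KMS and orchestration would re-point load balancers at each epoch boundary.

```python
# Minimal sketch: epoch-keyed mutation of a defensive parameter, so that
# attacker reconnaissance goes stale on a fixed cadence.
import hashlib, hmac, time

ROTATION_SECRET = b"replace-with-managed-secret"  # assumption: sourced from a KMS
EPOCH_SECONDS = 3600                              # assumption: mutate hourly

def current_port(service: str, now: float | None = None) -> int:
    epoch = int((now or time.time()) // EPOCH_SECONDS)
    digest = hmac.new(ROTATION_SECRET,
                      f"{service}:{epoch}".encode(),
                      hashlib.sha256).digest()
    # Map into a high port range; orchestration re-points the load
    # balancer to the new port at each epoch boundary.
    return 20000 + int.from_bytes(digest[:2], "big") % 20000

print(current_port("results-api"))                      # this hour's port
print(current_port("results-api", time.time() + 3600))  # next hour differs
```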
Reconceiving the Discipline
The solution is not to train security operations staff as engineers. The solution is to separate the disciplines completely and fund them asymmetrically.
Security operations should remain what it is: a reactive, event-driven, human-staffed discipline focused on reducing dwell time and maintaining forensic fidelity. Mature SOCs are valuable. They should be properly staffed, equipped, and measured against realistic KPIs (MTTD, MTTR, alert fidelity). But security operations staff should not be titled "Security Engineer."
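For concreteness, the KPIs named above reduce to simple arithmetic over incident timestamps. A minimal sketch, with illustrative records and field names:

```python
# Minimal sketch: MTTD and MTTR computed from incident timestamps.
# Records and field names are illustrative.
from datetime import datetime

incidents = [
    {"compromise": datetime(2024, 3, 1, 2, 0),
     "detected":   datetime(2024, 3, 1, 9, 30),
     "resolved":   datetime(2024, 3, 2, 11, 0)},
    {"compromise": datetime(2024, 4, 7, 14, 0),
     "detected":   datetime(2024, 4, 7, 16, 15),
     "resolved":   datetime(2024, 4, 8, 1, 45)},
]

def mean_hours(records, start: str, end: str) -> float:
    gaps = [(r[end] - r[start]).total_seconds() / 3600 for r in records]
    return sum(gaps) / len(gaps)

print(f"MTTD: {mean_hours(incidents, 'compromise', 'detected'):.1f} h")
print(f"MTTR: {mean_hours(incidents, 'detected', 'resolved'):.1f} h")
```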
Security engineering should be reconceived as a capital-funded, systems-level discipline. Security engineers design data planes, specify cryptographic requirements, perform threat modelling on system architectures, and implement continuous deployment of defensive mutations. They are fewer in number than SOC staff, because engineering is leverage, not scale. They are measured not by the number of incidents they respond to, but by the number of incidents that never occur because the system was architected to prevent them. They should be paid more and hired against a far higher bar, because the cost of poor engineering is catastrophic.
This requires regulatory alignment. The EU's DORA (Digital Operational Resilience Act) now mandates that financial institutions maintain governance structures that separate day-to-day operational resilience (the SOC function) from digital operational resilience testing. NIS2 similarly requires member states to establish governance structures for technical resilience, which in practice means security engineering oversight. APRA's CPS 234 standard for Australian financial institutions explicitly makes the Board accountable for information security capability, not just detection capability. These frameworks are beginning to codify the distinction, but industry practice has not caught up.
The Intellectual Demand
The change required is not technical—it is organisational. Organisations must accept that breach is inevitable, that detection is not prevention, and that the only path to material breach resistance is architectural: zero-knowledge substrate, cryptographic isolation, continuous mutation of defensive posture. This requires security engineers who are systems architects first, not detection specialists trained upward.
The firms that have already executed this transition—sovereign-first digital infrastructure platforms, some custodian institutions, a handful of financial exchanges—have not reduced their security operations budgets. They have simply stopped expecting those operations teams to prevent breaches. Instead, operations teams maintain the forensic and incident-response machinery that allows rapid recovery post-breach, whilst engineering teams ensure that post-breach recovery is possible without catastrophic data loss because the data was never accessible to an attacker in plaintext.
---
If your organisation holds or transfers the world's data and this reading aligns with your risk model, PULSE conducts technical briefings for qualified leadership. Request a briefing under executed Mutual NDA.
PULSE engages only with verified counterparties. Strategic briefing material — reference architecture, regulatory mapping, deployment topology — is released after counter-execution of the NDA scoped to the recipient's evaluation purpose.