Detection engineering has matured from ad hoc rule writing to a structured discipline. The best security teams treat detections as code: version-controlled, tested, and continuously improved. Here are five principles we follow at SenseOn.
1. Detect behaviours, not tools
Attackers change their tools constantly. The PowerShell script used in today's campaign will be different tomorrow. But the underlying behaviours, such as credential dumping, lateral movement via remote services, and data staging before exfiltration, remain remarkably consistent.
Write detections that target MITRE ATT&CK techniques rather than specific tool signatures. A detection for "unusual process accessing LSASS memory" catches credential dumping regardless of whether the attacker uses Mimikatz, a custom tool, or a living-off-the-land binary.
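To make this concrete, here is a minimal sketch of a behaviour-focused detection in Python. The event schema (field names such as target_image and granted_access) is hypothetical and loosely modelled on Sysmon-style process-access telemetry, not SenseOn's format; the point is that nothing in the logic depends on which tool performs the access.

```python
# Minimal sketch of a behaviour-focused detection. The event schema is
# hypothetical; the access-mask values are illustrative combinations that
# include PROCESS_VM_READ (0x0010), commonly requested when reading memory.
SUSPICIOUS_ACCESS_MASKS = {0x0010, 0x0410, 0x1010, 0x1410}

def detect_lsass_access(event: dict) -> dict | None:
    """Flag any process reading LSASS memory, regardless of the tool used.

    Maps to MITRE ATT&CK T1003.001 (OS Credential Dumping: LSASS Memory).
    """
    if event.get("event_type") != "process_access":
        return None
    target = event.get("target_image", "").lower()
    if target.endswith("\\lsass.exe") and event.get("granted_access") in SUSPICIOUS_ACCESS_MASKS:
        return {
            "technique": "T1003.001",
            "summary": "Process accessed LSASS memory",
            "source_image": event.get("source_image"),
        }
    return None
```

The detection fires whether the source process is Mimikatz, a renamed custom binary, or a signed system utility, because it keys on the behaviour (reading LSASS memory) rather than the tool.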
2. Test every detection before deployment
A detection that has never been tested against realistic telemetry is a liability. It might not fire when it should. It might fire constantly on benign activity. Either outcome erodes trust.
Build a testing pipeline. Use attack simulation tools to generate realistic telemetry. Verify that your detections fire on malicious activity and stay quiet on normal operations. Run this pipeline on every change.
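As a sketch of what that pipeline can look like, the pytest-style tests below replay recorded telemetry through the detection from the previous example. The fixture filenames and the import path are assumptions for illustration, not a real layout.

```python
# Illustrative pytest-style tests, run in CI on every change to a detection.
# Fixture files and the module path are hypothetical.
import json
import pathlib

from detections.credential_dumping import detect_lsass_access  # hypothetical module

FIXTURES = pathlib.Path("tests/fixtures")

def load_events(name: str) -> list[dict]:
    return json.loads((FIXTURES / name).read_text())

def test_fires_on_simulated_credential_dump():
    # Telemetry recorded from an attack-simulation run of an LSASS dump.
    events = load_events("lsass_dump_simulation.json")
    assert any(detect_lsass_access(e) for e in events)

def test_stays_quiet_on_benign_baseline():
    # A sample of normal endpoint activity from a test environment.
    events = load_events("benign_baseline.json")
    assert not any(detect_lsass_access(e) for e in events)
```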
3. Measure coverage, not volume
Counting the number of detection rules tells you nothing about your security posture. Coverage measurement requires mapping your detections to a framework like MITRE ATT&CK and identifying which techniques you can detect, which you cannot, and where your visibility has gaps.
This mapping reveals blind spots. Perhaps you have strong endpoint detection but limited visibility into cloud API abuse. That gap becomes a prioritised item on your engineering roadmap.
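One simple way to surface those gaps is to compare the techniques your detections declare against the techniques you have prioritised. The registry and priority list below are illustrative, not SenseOn's.

```python
# Hedged sketch of coverage reporting against MITRE ATT&CK technique IDs.
from collections import defaultdict

# Each detection declares the ATT&CK techniques it addresses (illustrative).
DETECTIONS = [
    {"name": "lsass_memory_access", "techniques": ["T1003.001"]},
    {"name": "remote_service_lateral_movement", "techniques": ["T1021.001", "T1021.002"]},
    {"name": "archive_staging_before_exfil", "techniques": ["T1560.001"]},
]

# Techniques prioritised for your environment, e.g. from threat modelling.
PRIORITY_TECHNIQUES = {
    "T1003.001",  # OS Credential Dumping: LSASS Memory
    "T1021.002",  # Remote Services: SMB/Windows Admin Shares
    "T1560.001",  # Archive Collected Data: Archive via Utility
    "T1530",      # Data from Cloud Storage
}

def coverage_report(detections, priority):
    covered = defaultdict(list)
    for det in detections:
        for technique in det["techniques"]:
            covered[technique].append(det["name"])
    gaps = sorted(priority - covered.keys())
    return covered, gaps

covered, gaps = coverage_report(DETECTIONS, PRIORITY_TECHNIQUES)
print(f"Covered techniques: {sorted(covered)}")
print(f"Gaps to prioritise: {gaps}")  # here T1530 flags the cloud visibility gap
```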
4. Manage the alert lifecycle
Every detection should have an owner, a documented response procedure, and a defined lifecycle. When should it be tuned? When should it be retired? What false positive patterns are known?
Detections without owners become technical debt. They generate alerts that nobody understands, eroding analyst trust in the entire detection programme.
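One lightweight way to prevent that drift is to keep lifecycle metadata in version control alongside each detection. The sketch below is illustrative; the field names and the 180-day review cadence are assumptions, not a prescribed schema.

```python
# Minimal sketch of per-detection lifecycle metadata (illustrative schema).
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class DetectionMetadata:
    name: str
    owner: str                      # team or individual accountable for tuning
    runbook_url: str                # documented response procedure
    created: date
    last_reviewed: date
    known_false_positives: list[str] = field(default_factory=list)
    status: str = "active"          # active | tuning | retired

    def review_due(self, cadence_days: int = 180) -> bool:
        """Flag detections that have gone too long without an owner review."""
        return date.today() - self.last_reviewed > timedelta(days=cadence_days)

meta = DetectionMetadata(
    name="lsass_memory_access",
    owner="detection-engineering@example.com",
    runbook_url="https://wiki.example.com/runbooks/credential-dumping",
    created=date(2023, 5, 2),
    last_reviewed=date(2024, 1, 15),
    known_false_positives=["AV engines scanning process memory"],
)
if meta.review_due():
    print(f"{meta.name} is overdue for review by {meta.owner}")
```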
5. Automate what you can, but keep humans in the loop
Automation excels at enrichment and triage: adding context to alerts, scoring severity, and grouping related events. It also works well for containment of high-confidence detections, like isolating an endpoint exhibiting known ransomware behaviour.
But deciding whether to escalate an incident, assessing business impact, and coordinating the response still require human judgement. Build automation that amplifies analyst capability rather than replacing it.
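A rough sketch of that division of labour follows. The detection names, confidence scores, and thresholds are hypothetical: automation enriches and scores every alert, auto-containment is reserved for high-confidence known-bad behaviour, and everything else is queued for an analyst.

```python
# Illustrative triage routing: enrichment and scoring are automated,
# escalation decisions stay with a human. All names and thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class Alert:
    detection: str
    host: str
    confidence: float            # 0.0 - 1.0, from the detection engine
    context: dict | None = None

def enrich(alert: Alert) -> Alert:
    # Placeholder enrichment: in practice this would pull asset criticality,
    # user identity, and related alerts from other telemetry sources.
    alert.context = {"asset_criticality": "high", "related_alerts": 2}
    return alert

def triage(alert: Alert) -> str:
    alert = enrich(alert)
    if alert.detection == "known_ransomware_behaviour" and alert.confidence >= 0.95:
        return f"auto-contain: isolate {alert.host}"   # machine-speed containment
    return "queue for analyst review"                  # human judgement for escalation

print(triage(Alert("known_ransomware_behaviour", "host-42", 0.98)))
print(triage(Alert("unusual_cloud_api_usage", "host-17", 0.70)))
```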
Building a detection programme
These principles apply whether you are building detections in-house or evaluating a platform. At SenseOn, our detection engine is built on behavioural analysis across multiple telemetry sources, with every detection mapped to ATT&CK and tested against real-world attack simulations.