Is this suspicious network activity alert actually a sign of intrusion, or just another false positive? As the cybersecurity visibility gap widens, anyone who works in a security operations centre (SOC) is likely to ask themselves and their colleagues this question on a regular basis.
Unfortunately, as analysts know, answering it is rarely straightforward. Most organisations remain caught up in data silos, receiving more data than ever but still lacking the tools to combine different sources of information.
Cybersecurity vendors haven’t helped much, either. Ironically, the security solutions sold to companies as a way to narrow the cybersecurity visibility gap can inadvertently widen it. A network detection and response (NDR) platform, for example, can help you see North-South traffic, but this insight is of little value without a way to correlate it with what is happening on applications, endpoints, servers, and so on.
The result can be more tools and data but less overall visibility. The proof is in where SOC teams spend their time: chasing down false alerts. Today, SOC members spend about one-third (32%) of their day investigating incidents that pose no real threat to their organisation. Meanwhile, the number of cyber attacks happening globally keeps rising.
To overcome this problem, SOCs need to move away from analysing data sources in isolation. In other words, they need to plug their cybersecurity visibility gap. Here’s how, plus why modern security tools still fall short.
NDR, endpoint detection and response (EDR), intrusion detection systems (IDS), and intrusion prevention systems (IPS): tools like these have always been at the heart of a typical SOC. But as modern attacks grow more sophisticated and corporate environments more complex, their limitations are becoming more pronounced.
The core challenge is disparate data. The data modern security systems produce is spread across different products and threat feeds, meaning that analysts have to hop from one tool to the next and sift through multiple data sources – and correlate “individual piece” alerts – to figure out if their organisation is under attack.
Read more: The hidden cost of alert fatigue in cybersecurity
Nearly 1 in 2 security engineers and analysts say the number of alerts they receive has tripled, quadrupled, or even quintupled over the last year. Inevitably, some alerts get ignored, while manual investigation processes inflate mean time to detect (MTTD) and mean time to respond (MTTR).
In theory, a security information and event management (SIEM) platform should help. One of its primary functions, after all, is to centralise data. But SIEMs cannot natively link activity on an endpoint to the corresponding network activity.
A SIEM solution like Microsoft Sentinel will send you endless streams of alerts but will not show you what links to what, i.e., context. This means security analysts don’t have the information they need to understand attack chains and stop the advanced threats targeting their networks.
Learn more: How you can supercharge Microsoft Sentinel SIEM.
The standard cure for this problem, more SIEM engineering, is often worse than the disease.
For example, to go beyond poorly fitting out-of-the-box (OOTB) detections, a security team might build custom use cases and rules for its environment. This is an extremely costly process that depends on specialist staff and continuous testing. It also doubles down on the rule- and signature-based methodology behind SIEMs, an approach that advanced threats can easily evade.
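To see why signature-based rules are brittle, consider a minimal sketch of one. The rule below is invented for illustration (the field names and patterns are assumptions, not any vendor's actual detection content): it fires on an exact pattern in a single log field, so a trivial change by the attacker sidesteps it entirely.

```python
# Hypothetical signature-style detection rule: fire when one log field
# matches a fixed pattern. Field names and patterns are illustrative only.

def rule_suspicious_powershell(event: dict) -> bool:
    """Match events where powershell.exe runs an encoded command."""
    cmd = event.get("command_line", "").lower()
    return "powershell" in cmd and "-encodedcommand" in cmd

# A textbook match fires the rule...
alert = rule_suspicious_powershell(
    {"host": "ws-01", "command_line": "powershell.exe -EncodedCommand SQBFAFgA..."}
)

# ...but the same behaviour from a renamed binary with a shortened flag
# slips straight past it. Each rule sees one event in isolation, with no
# context about what the process did before or after.
missed = rule_suspicious_powershell(
    {"host": "ws-01", "command_line": "ps_copy.exe -enc SQBFAFgA..."}
)
```

Multiply this fragility across hundreds of custom rules and the cost of the continuous testing mentioned above becomes clear.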
You might also want to bring more logs into your SIEM from disparate solutions like NDR and EDR. But more data does not mean more insight. It does mean more data that has to be parsed, normalised, and continuously fine-tuned, which can send costs spiralling. And the broader a data set gets, the higher the volume of false alerts becomes.
Ultimately, these limitations mean that SIEM engineering has run up against a wall. No amount of SIEM configuration will result in the correlated, contextual information SOCs need to stop modern threats reliably.
Data ingested into a SIEM cannot link endpoint activity to network activity unless that link is captured at the source, as the communication happens.
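The difference is easiest to see in data terms. The sketch below (invented field names, standard library only) contrasts the two separate logs a SIEM typically receives, which share no reliable join key, with a single record captured at the source, where the socket, the owning process, and the user are bound together at the moment the connection is made.

```python
# Two separate logs, as a SIEM typically ingests them. Nothing reliably ties
# the network flow to the process that created it after the fact: clocks
# drift, ports get reused, hostnames are ambiguous. (Illustrative fields.)
network_log = {"src_ip": "10.0.0.5", "dst_ip": "203.0.113.7",
               "dst_port": 443, "ts": 1717000000.21}
endpoint_log = {"host": "ws-01", "process": "updater.exe",
                "user": "j.smith", "ts": 1717000000.94}

# A record captured at the source instead binds all three views of the same
# communication into one event when the socket is opened.
source_captured_event = {
    "host": "ws-01",
    "process": "updater.exe",       # WHAT program opened the socket
    "user": "j.smith",              # WHO it was running as
    "socket": ("10.0.0.5", 54122, "203.0.113.7", 443),  # the network view
    "ts": 1717000000.21,
}

def can_answer_who_and_what(event: dict) -> bool:
    """True only if a single event links a network flow to a process and user."""
    return {"process", "user", "socket"} <= event.keys()
```

Neither raw log passes that check on its own; the source-captured record does, which is exactly the context a downstream SIEM cannot reconstruct retrospectively.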
This is the security data problem, and its results are getting harder to ignore, as one study of SOC teams makes clear.
The problems SIEMs face also extend to “next-generation” solutions, such as extended detection and response (XDR) platforms, which still require data sources to be added and configured and alert rules to be repeatedly tuned.
Instead of cobbling together inconsistent log files, SenseOn uses a single piece of software to collect data from endpoints and networks in a single format.
With its Universal Sensor deployed across endpoints (whether on-premise, remote, or in the cloud), SenseOn links network protocol information, in real time, with the programs managing those communications and the user identity they are running under. This gives full visibility of not only WHAT is happening on the network but WHO is responsible and WHY.
Analyst teams need never again be swamped with “individual piece” alerts. Instead, they see what we call a “Case”: not just that something happened (like a connection to a C&C server) but also the important context around it, such as how the attacker entered and established persistence within the network, how they gained elevated privileges, and where they have traversed within the perimeter, with every observation clearly linked back to the MITRE ATT&CK tactics and techniques identified.
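SenseOn's internal implementation is not public, so purely as a sketch of the idea: individual observations can be grouped into one case by an entity they share (here, the host), with each observation carrying its ATT&CK tactic. The schema, grouping key, and example detections below are all assumptions for illustration.

```python
# Illustrative only: group individual observations into one "case" when they
# share an entity (the same host), collecting the ATT&CK tactics seen along
# the way. Not SenseOn's actual schema or grouping logic.
from collections import defaultdict

observations = [
    {"host": "ws-01", "detail": "phishing attachment opened",
     "tactic": "Initial Access"},
    {"host": "ws-01", "detail": "registry run key added",
     "tactic": "Persistence"},
    {"host": "ws-01", "detail": "connection to known C&C server",
     "tactic": "Command and Control"},
    {"host": "db-02", "detail": "burst of failed logins",
     "tactic": "Credential Access"},
]

def build_cases(obs: list[dict]) -> dict[str, dict]:
    """Group observations by host into cases, each listing its ATT&CK tactics."""
    cases: dict[str, dict] = defaultdict(
        lambda: {"observations": [], "tactics": []}
    )
    for o in obs:
        case = cases[o["host"]]
        case["observations"].append(o["detail"])
        case["tactics"].append(o["tactic"])
    return dict(cases)

cases = build_cases(observations)
# The three ws-01 alerts now arrive as one case spanning the attack chain,
# rather than three unrelated notifications.
```

Even this toy grouping shows the shift in the analyst's starting point: one narrative per host instead of a queue of disconnected alerts.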
In this way, security teams can quickly determine why a breach happened, its severity, and the next steps they should take.
With over 600 out-of-the-box detections that do not require manual configuration and fine-tuning, SenseOn is easy to deploy and gives value from day one.
Try a demo of SenseOn today.