Independent validation matters in cybersecurity. Vendor claims about detection rates and accuracy are ubiquitous, but without rigorous third-party testing, organisations have no reliable way to compare products or verify that a solution delivers what it promises.
That is why we submitted SenseOn to AV-Comparatives' Real-World Protection Test, one of the most respected independent security product evaluations in the industry. The results speak for themselves: SenseOn achieved a 99% protection rate with zero false positives.
This article explains what these results mean, how AV-Comparatives conducts its testing, why zero false positives is operationally significant, and how SenseOn's cross-domain correlation architecture makes these results possible.
What Is AV-Comparatives?
AV-Comparatives is an independent Austrian organisation that has been testing security products since 1999. Their testing methodologies are widely regarded as among the most rigorous and transparent in the industry. Unlike vendor-sponsored benchmarks, AV-Comparatives tests are conducted independently, with standardised methodologies that ensure fair comparison across products.
The organisation tests products across multiple dimensions: real-world protection, malware detection rates, performance impact, false positive rates, and advanced threat protection. Their results are used by enterprises, government agencies, and industry analysts worldwide to inform purchasing decisions.
The Real-World Protection Test Methodology
The Real-World Protection Test is specifically designed to simulate the conditions that security products face in actual enterprise environments. Rather than simply scanning static malware samples, the test exposes products to live threats using real-world attack vectors.
The testing process works as follows:
Threat sourcing: AV-Comparatives collects live malicious URLs, phishing sites, drive-by downloads, and other web-borne threats that are actively being used in the wild. These are not curated malware samples from repositories; they are the actual threats that organisations face daily.
Test execution: Each security product is deployed on an identical test system. The test systems access the malicious URLs and content under controlled conditions, and the product's ability to block, detect, or remediate the threat is recorded at each stage of the attack chain.
False positive testing: Crucially, the test also measures false positives by exposing products to a large corpus of legitimate software and websites. Products that block benign content are penalised, reflecting the real-world cost of false alarms.
Scoring: Products receive a protection score based on the percentage of threats successfully blocked, adjusted for false positive rates. Products with high detection but also high false positives do not receive top ratings.
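As a simplified sketch of that scoring principle, consider the logic below. The penalty weighting and function names are illustrative assumptions for this article, not AV-Comparatives' published formula; the point is only that detection counts for little if it arrives with excessive false positives.

```python
# Simplified sketch of the scoring principle: high detection does not
# earn a top rating if it comes with many false positives. The penalty
# value is an invented illustration, not AV-Comparatives' actual formula.

def protection_rate(blocked: int, total_threats: int) -> float:
    """Percentage of live threats blocked at any stage of the attack chain."""
    return 100.0 * blocked / total_threats

def adjusted_score(rate: float, false_positives: int,
                   fp_penalty: float = 0.5) -> float:
    """Penalise each false positive against the raw protection rate."""
    return max(0.0, rate - fp_penalty * false_positives)

# A product blocking 495 of 500 threats with zero false positives:
print(adjusted_score(protection_rate(495, 500), false_positives=0))  # 99.0
```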
SenseOn's Results: 99% Protection, Zero False Positives
99% Protection Rate
SenseOn successfully blocked 99% of the real-world threats presented during testing. This places SenseOn among the highest-performing products in the evaluation, demonstrating that our detection capabilities are not merely theoretical but proven against live, in-the-wild threats.
The 99% protection rate reflects SenseOn's multi-layered detection approach. Threats were blocked at various stages of the attack chain: some at the network level before they reached the endpoint, others through behavioural analysis during execution, and others through post-execution detection and remediation.
Zero False Positives
Whilst a 99% protection rate is impressive, the zero-false-positive result is arguably more operationally significant. Here is why.
False positives are the hidden tax of cybersecurity. Every false alarm consumes analyst time, erodes trust in the security tooling, and can disrupt legitimate business operations. In large enterprises, security tools that generate even a modest false positive rate can produce hundreds or thousands of spurious alerts daily.
The operational impact of false positives is severe:
- Analyst fatigue: SOC analysts who repeatedly investigate false alarms become desensitised to alerts, increasing the risk that genuine threats are overlooked or deprioritised.
- Business disruption: Automated blocking of legitimate software or websites interrupts employee productivity and generates support tickets.
- Trust erosion: When security tools cry wolf repeatedly, business stakeholders push back against security policies, and end users develop workarounds that actually increase risk.
- Resource waste: Every false positive investigation has a direct cost in analyst time. At industry-average investigation times of 15-30 minutes per alert, even 10 false positives per day can consume 2.5 to 5 hours of skilled analyst time, as the sketch after this list shows.
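To make that resource cost concrete, here is a minimal back-of-the-envelope calculation using the industry-average figures quoted above (the volumes and per-alert times are those assumptions, not measured data):

```python
# Back-of-the-envelope cost of false positive triage, using the
# industry-average figures quoted in the list above.

def daily_triage_hours(false_positives_per_day: int,
                       minutes_per_alert: float) -> float:
    """Hours of analyst time consumed investigating false alarms."""
    return false_positives_per_day * minutes_per_alert / 60.0

for minutes in (15, 30):
    hours = daily_triage_hours(false_positives_per_day=10,
                               minutes_per_alert=minutes)
    print(f"{minutes} min/alert -> {hours:.1f} analyst hours per day")
# 15 min/alert -> 2.5 analyst hours per day
# 30 min/alert -> 5.0 analyst hours per day
```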
Achieving zero false positives in the AV-Comparatives test demonstrates that SenseOn can deliver high detection rates without imposing these operational costs on security teams.
How the Industry Compares
To contextualise SenseOn's results, it is worth examining industry averages. In typical AV-Comparatives testing cycles, the average false positive count across all tested products ranges from 5 to 15 per test cycle. Some products that achieve very high detection rates do so at the cost of elevated false positive rates, a trade-off that looks good in detection benchmarks but creates significant operational burden.
Plotted with detection rate on the vertical axis and false positive count on the horizontal, the ideal position is the upper-left: high detection with few false positives. SenseOn's results place it firmly in this category, demonstrating that detection accuracy and operational efficiency are not mutually exclusive.
Products that rely heavily on aggressive heuristics or overly broad behavioural rules often achieve high detection rates but at the cost of excessive false positives. Conversely, products that are tuned conservatively to avoid false positives may miss genuine threats. SenseOn's cross-domain correlation architecture is specifically designed to resolve this tension.
Cross-Domain Correlation: Why These Results Are Possible
SenseOn's results are not the product of a single detection methodology pushed to its limits. They are the product of three independent AI methodologies working in concert, an approach we call cross-domain correlation.
Supervised Learning
SenseOn's supervised learning models are trained on vast datasets of labelled malware samples and benign software. These models excel at identifying known threat families, variants of existing malware, and attack patterns that match established signatures. In the AV-Comparatives test, supervised learning provided the first line of defence against threats with known characteristics.
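As a minimal sketch of how this layer operates in principle: a classifier trained on labelled samples assigns a maliciousness score to new ones. The toy features, data, and scikit-learn model below are illustrative assumptions, not SenseOn's production pipeline, which is not public.

```python
# Minimal sketch of the supervised layer: train on labelled samples,
# then score unseen ones. Features and model choice are illustrative.
from sklearn.ensemble import RandomForestClassifier

# Toy feature vectors, e.g. (entropy, imported-API count, packed flag);
# labels: 1 = malicious, 0 = benign.
X_train = [
    [7.9, 12, 1], [7.6, 8, 1], [7.8, 15, 1],      # known malware family
    [4.2, 240, 0], [5.1, 310, 0], [3.8, 180, 0],  # benign software
]
y_train = [1, 1, 1, 0, 0, 0]

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Score an unseen sample resembling a known threat family.
sample = [[7.7, 10, 1]]
print(clf.predict_proba(sample)[0][1])  # probability of "malicious"
```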
Unsupervised Learning
Unsupervised learning models identify anomalies by building statistical baselines of normal behaviour. They detect threats that supervised models might miss: novel malware, zero-day exploits, and attacks that deliberately avoid matching known signatures. Importantly for false positive performance, unsupervised models distinguish between unusual-but-benign behaviour and genuinely malicious anomalies.
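A minimal sketch of the baselining idea follows, assuming a single behavioural metric and invented z-score thresholds; real models baseline many signals at once, but the principle of separating "unusual but plausibly benign" from "escalate" is the same.

```python
# Sketch of statistical baselining: learn the mean and spread of a
# behavioural metric, then flag only extreme deviations. The metric
# and thresholds are illustrative assumptions, not SenseOn's models.
import statistics

baseline = [120, 135, 110, 125, 130, 118, 122, 128]  # e.g. daily DNS queries
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def classify(observation: float) -> str:
    z = abs(observation - mean) / stdev
    if z > 6.0:
        return "anomalous: escalate"
    if z > 3.0:
        return "unusual but plausibly benign: corroborate first"
    return "normal"

print(classify(126))   # normal
print(classify(155))   # unusual but plausibly benign: corroborate first
print(classify(400))   # anomalous: escalate
```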
Deep Learning
The deep learning layer processes raw telemetry data to identify complex, multi-stage attack patterns. Rather than evaluating individual indicators in isolation, deep learning models assess sequences of events and their relationships. This contextual analysis is particularly effective at reducing false positives because it evaluates whether a complete attack narrative is present, rather than triggering on isolated suspicious indicators.
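The models themselves are beyond a short sketch, but the underlying contextual principle, scoring a sequence of events against a complete attack narrative rather than alerting on isolated indicators, can be illustrated in a few lines. The stage names and coverage logic below are illustrative assumptions, not the deep learning layer itself.

```python
# Sketch of the contextual principle: score a *sequence* of events
# against an expected attack narrative, not isolated indicators.
# Stage names are illustrative assumptions.

ATTACK_NARRATIVE = ["initial_access", "execution",
                    "persistence", "lateral_movement"]

def narrative_coverage(events: list[str]) -> float:
    """Fraction of the attack narrative observed, in order."""
    idx = 0
    for event in events:
        if idx < len(ATTACK_NARRATIVE) and event == ATTACK_NARRATIVE[idx]:
            idx += 1
    return idx / len(ATTACK_NARRATIVE)

# An isolated "execution" event, with no preceding stage, scores zero;
# a coherent multi-stage chain scores highly.
print(narrative_coverage(["execution"]))                     # 0.0
print(narrative_coverage(["initial_access", "execution",
                          "persistence", "lateral_movement"]))  # 1.0
```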
Cross-Validation
The critical innovation of the cross-domain correlation engine is cross-validation. When any single methodology flags a potential threat, the other two independently assess the same data. Genuine threats typically trigger signals across multiple methodologies, whilst false positives (activities that appear suspicious from one analytical perspective but benign when viewed holistically) are filtered out.
This cross-validation architecture is what enables SenseOn to simultaneously achieve high detection rates and zero false positives. Each methodology compensates for the blind spots of the others, and the consensus requirement ensures that only genuine threats are escalated.
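As a minimal sketch of the consensus idea, assuming each methodology emits an independent boolean verdict: escalate only when multiple layers agree. The two-of-three quorum below is an illustrative assumption, not the production engine's logic.

```python
# Sketch of cross-validation by consensus: escalate only when
# independent methodologies corroborate each other. The two-of-three
# quorum is an illustrative assumption.

def consensus_verdict(supervised: bool, unsupervised: bool,
                      deep_learning: bool, quorum: int = 2) -> str:
    votes = sum([supervised, unsupervised, deep_learning])
    return "escalate" if votes >= quorum else "suppress"

# A known-bad sample corroborated behaviourally: escalated.
print(consensus_verdict(True, True, False))   # escalate
# Unusual-but-benign activity flagged by one layer only: suppressed.
print(consensus_verdict(False, True, False))  # suppress
```

Requiring corroboration trades a small amount of single-signal sensitivity for a large reduction in spurious escalations, which is the trade-off the test results reflect.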
What These Results Mean for Organisations
For security teams evaluating detection platforms, the AV-Comparatives results provide independent evidence to support several practical conclusions:
Detection efficacy is proven, not promised: SenseOn's 99% protection rate is verified against live threats under controlled conditions, not self-reported against curated test sets.
Operational efficiency is built in: Zero false positives means that when SenseOn raises an alert, it warrants investigation. Security teams can trust the platform's judgement and allocate their limited resources to genuine threats.
The cross-domain correlation engine works: The architectural approach of cross-validating across three independent methodologies delivers measurable results in independent testing, validating the design principles behind SenseOn's platform.
Consolidation does not mean compromise: Organisations considering replacing multiple point solutions (EDR, NDR, SIEM) with a unified platform can be confident that consolidation does not come at the cost of detection quality.
Looking Forward
Independent testing is not a one-time event. We are committed to ongoing participation in AV-Comparatives and other independent evaluations because we believe that transparency and accountability are fundamental to building trust with security teams.
The threat landscape evolves continuously, and detection platforms must evolve with it. Our participation in independent testing programmes ensures that SenseOn's detection capabilities are regularly validated against the latest real-world threats, and that our customers can be confident in the platform's ongoing effectiveness.
We encourage organisations evaluating security platforms to look beyond vendor marketing claims and demand independent validation. The AV-Comparatives results demonstrate that it is possible to achieve elite detection rates without sacrificing the operational efficiency that security teams desperately need.