SE Labs
Simon Edwards
Director
WEBSITE www.SELabs.uk
TWITTER @SELabsUK
EMAIL [email protected]
FACEBOOK www.facebook.com/selabsuk
BLOG blog.selabs.uk
PHONE 0203 875 5000
POST ONE Croydon, London, CR0 0XT

MANAGEMENT
Operations Director Marc Briggs
Office Manager Magdalena Jurenko
Technical Lead Stefan Dumitrascu

TESTING TEAM
Thomas Bean
Dimitar Dobrev
Liam Fisher
Gia Gorbold
Pooja Jain
Ivan Merazchiev
Jon Thompson
Jake Warren
Stephen Withey

IT SUPPORT
Danny King-Smith
Chris Short

PUBLICATION
Steve Haines
Colin Mackleworth

SE Labs is BS EN ISO 9001:2015 certified for The Provision of IT Security Product Testing.
SE Labs Ltd is a member of the Anti-Malware Testing Standards Organization (AMTSO).

Introduction

MINORITY REPORT
A common criticism of computer security products is that they can only protect against known threats. When new attacks are detected and analysed, security companies produce updates based on this new knowledge, which can then be applied to endpoint, network and cloud security software and services.
But in the time between detection of the attack and application
of the corresponding updates, systems are vulnerable to
compromise. Almost by definition at least one victim, the so-called
‘patient zero’, has to experience the threat before new protection
systems can be deployed. While the rest of us benefit from patient
zero’s misfortune, patient zero has potentially suffered
catastrophic damage to its operations.
Security companies have, for some years, developed advanced detection systems, often labelled as using ‘AI’, ‘machine learning’ or some other technical-sounding term. The basic idea is that past threats are analysed in depth to identify what future threats might look like. Ideally the result will be a product that can detect potentially malicious files or behaviour before the attack is successful.
It is possible to test claims of this type of predictive capability by taking an old version of a product, denying it the ability to update or query cloud services, and then exposing it to threats that were created, detected and analysed months or even years after its own creation. It’s the equivalent of sending an old product forward in time and seeing how well it works against future threats.
This is exactly what we did in this test. Using CylancePROTECT’s AI model from May 2015, we collected serious threats dating from February 2016 all the way through to November 2017.
Such threats included WannaCry, a mid-2017 ransomware-based
attack that was spread using the NSA’s EternalBlue exploit; Petya,
a ransomware attack from early 2016; and GhostAdmin, malware
from 2017 capable of taking remote control of victim systems and
exfiltrating data.
Executive Summary

While it is good practice to keep security products fully updated, in many cases keeping endpoint security products continuously up to date is challenging. The purpose of this test is to examine how effective past AI models could be against newer threats. For this reason a version of CylancePROTECT from early 2015 was used against threats from 2016, 2017 and 2018.

The product is scored according to how far into the future its protection is seen to reach. For example, if it protected against a threat that was created one year after the product was built, then it would have a predictive advantage of 12 months.

Malware campaigns can run over a period of time, with those in control making changes to the malware to add features or evade detection. For this reason we used different variants for each ‘family’ of attack. For example, we used five different versions of the Cerber ransomware attack, with samples dating from December 2016 through to February 2018.

CylancePROTECT’s Predictive Advantage (PA) varied, depending on the threat. It ranged from 11 months up to 33 months, with an average PA of 25 months. In other words, in some cases it was able to recognise and protect against threats that did not appear in the real world until two years and nine months after the model was built. Generally speaking, it was effective, without updates, against threats appearing just over two years into the future.

These results demonstrate that CylancePROTECT users would have been safe from the zero-day attack types used in the test even if they had not updated their software for two years and nine months.
1. Predictive Advantage by Threat Family
Predictive Advantage (PA) is the time difference
between the creation of the model and the first
time a threat is seen by victims and security
companies protecting those victims.
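As a rough illustration of the arithmetic, the short Python sketch below (not part of the test methodology; the function name and dates are purely illustrative) counts the whole months between a model-creation date and a threat's first appearance. A May 2015 model and a threat first seen in October 2017 work out at 29 months, consistent with the figures reported for the Bad Rabbit samples.

    from datetime import date

    def predictive_advantage_months(model_created: date, threat_first_seen: date) -> int:
        # Whole months between the model's creation and the threat's first appearance.
        return (threat_first_seen.year - model_created.year) * 12 \
            + (threat_first_seen.month - model_created.month)

    # Illustrative dates only: the tested model dates from May 2015;
    # a threat first seen in October 2017 gives a PA of 29 months.
    print(predictive_advantage_months(date(2015, 5, 1), date(2017, 10, 1)))  # 29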
We exposed the model to a range of threats.
These comprised nine different ‘families’ that
featured in well-publicised campaigns. Each family
set contains five variants as found in the wild.
The model represented in this test was created in
May 2015. This is the same model as that deployed
in the real world with CylancePROTECT’s agent,
version 1300.
The table below shows the average PA value for each threat family. The higher the number, the greater the distance in time from the model’s creation date to the first known detection of that specific set of files. Higher PA values are more impressive, as they show the model’s ability to predict threats further into the future.
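To make the relationship between the family figures and the per-variant figures concrete, the sketch below (Python, illustrative only; the helper name is hypothetical) shows one plausible way a family value could be derived: as the rounded mean of its variants' PA values, using the GhostAdmin numbers reported in the campaign table later in this document.

    def family_pa(variant_pas):
        # Assumed derivation: average the per-variant PA values and round to whole months.
        return round(sum(variant_pas) / len(variant_pas))

    ghostadmin_variants = [23, 20, 20, 26, 33]   # per-variant PAs from the GhostAdmin campaign table
    print(family_pa(ghostadmin_variants))        # 24, matching the family-level table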
Malware campaigns can run over a period of time, with those in control making changes to the malware to add features or evade detection. For this reason we used different variants for each ‘family’ of attack. Variants within one family group may appear in the real world at different times as a campaign develops. For example, the GoldenEye samples range from December 2016, through May 2017, until July 2017.

Predictive Advantage by Threat Family

Threat Family     Predictive Advantage (months)
Bad Rabbit        29
Cerber            30
GhostAdmin        24
GoldenEye         23
Locky             20
NotPetya          25
Petya             26
Reyptson          27
WannaCry          24

2. Predictive Advantage by Individual Campaign

The tables below show the different PA values for the individual threats, which are grouped into their own families.
CAMPAIGN: Bad Rabbit

Threat Variant    Predictive Advantage (months)
Bad Rabbit1       29
Bad Rabbit2       29
Bad Rabbit3       29
Bad Rabbit4       29
Bad Rabbit5       29

CAMPAIGN: Cerber

Threat Variant    Predictive Advantage (months)
Cerber1           33
Cerber2           32
Cerber3           19
Cerber4           32
Cerber5           32

CAMPAIGN: GhostAdmin

Threat Variant    Predictive Advantage (months)
GhostAdmin1       23
GhostAdmin2       20
GhostAdmin3       20
GhostAdmin4       26
GhostAdmin5       33