How Security Operations Can Safely Stop Investigating Benign True Positives
True Positives. It’s a topic of great interest to me. Security Operations teams can spend a lot of time separating out the events that turn out to be truly non-malicious. There is an easier way. But before we go further, let’s align and calibrate on the terminology of True/False Positives/Negatives. These terms enjoy varying levels of agreement. It reminds me of VLANs: put 5 people in a room and you’ll get 6 different definitions. To make sure we are on the same page, let’s start with basic definitions accompanied by real-life examples.
True Positive (TP) is an event correctly identified. A true positive can be malicious or benign (i.e. non-malicious), and there is a fine line between the benign kind and a false positive. Consider an Antivirus that identifies a file as infected and won’t let you open it because of a specific capability in it that could potentially be used against you. The alert is legitimate and one we would want to be warned about, but it is not what you were looking for: the file is not actually a virus, and whether it is a risk depends entirely on the context of use. A network scanning app could fall under this category. Obviously, if you intended to run the scan, it isn’t a risk; if a hacker uses the same tool, it is. True positives require examination. It’s not good practice to ignore them.
False Positive (FP) is an event incorrectly identified: a non-malicious activity classified as dangerous. For example, a completely innocent executable file flagged as unsafe by the Antivirus, which then prevents you from running it. This commonly happens with heuristic-based AV engines. They may, for instance, treat a high compression ratio in an executable as suspicious because malware is often packed, yet packing is also commonly used to make software run directly and independently, without an installation process.
True Negative (TN) is an event correctly rejected: a non-malicious activity correctly classified as such. For example, a non-malicious file successfully ignored and classified as clean by the AV solution.
False Negative (FN) is an event incorrectly rejected. This is the most dangerous state: a malicious activity is not recognized as such. For example, a file containing a virus passes through the endpoint solution undetected and is classified as normal.
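The four definitions above can be summarized in a small sketch. This is my own illustration, not any product’s logic; note the comment on the TP branch, which captures the article’s key point that a correctly identified event can still be benign. The function name and boolean inputs are assumptions for the example.

```python
def classify_outcome(alert_fired: bool, condition_present: bool) -> str:
    """Classify a detector's verdict against ground truth.

    condition_present: the thing the detection rule looks for genuinely
    exists (e.g., the file really does contain the risky capability).
    """
    if alert_fired and condition_present:
        return "TP"  # correctly identified -- but may still be benign in context
    if alert_fired and not condition_present:
        return "FP"  # e.g., an innocent executable flagged by heuristics
    if not alert_fired and not condition_present:
        return "TN"  # non-malicious activity correctly ignored
    return "FN"      # the most dangerous state: a miss
```

The benign True Positive discussed throughout this post lives in that first branch: the rule fired correctly, yet nothing malicious happened.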
I believe we would all agree that False Negatives are a complete product failure and the case most worth being concerned and alarmed about. I also believe that False Positives are the related “family” member whose notoriety masks the importance of discussing True Positives.
So back to the original topic of True Positives. They can be subtle and confusing, and there is added uncertainty because their interpretation depends somewhat on the solution doing the detection. For some, grasping the concept of True Positives may take a while, but essentially it is simple: a True Positive is an event that should have triggered the detection. But does that make the event interesting? Is it bad or wrong? In most cases, it is neither. Yet organizations still have to verify whether it is good or bad. It’s a tedious task, one might refer to it as “separating the wheat from the chaff.”
In the definitions used above, I used examples from the Antivirus world because these are common scenarios that many have experienced. Let’s explore some examples from a User and Entity Behavior Analytics (UEBA) perspective.
UEBA solutions are commonly based on heuristic machine learning and statistical models that build a profile for network entities and then alert on deviations from that baseline. For example, Mellany, an accountant in the finance department of an automobile factory, accesses a file share she is allowed to access but has never used before. Now, let’s add more data to this example: 30% of her peers (users similar to her across multiple vectors) use this network share on a regular basis. This is a common scenario that behavioral solutions can detect, and there are many reasons the new behavior could happen. Perhaps Mellany’s responsibilities changed and she now needs the access. It could be that a third party took over her account. Or maybe malware is probing the network share and her access levels.
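The peer-group check behind the Mellany example can be sketched as follows. The data layout, function name, and simple ratio are my assumptions for illustration; real UEBA engines use richer statistical models over many vectors, not a single lookup.

```python
from typing import Dict, Iterable, Set, Tuple

def score_access(user: str, share: str,
                 history: Dict[str, Set[str]],
                 peers: Dict[str, Iterable[str]]) -> Tuple[bool, float]:
    """Return (first_time_for_user, fraction_of_peers_who_use_the_share)."""
    # A first-time access is a deviation from this user's own baseline.
    first_time = share not in history.get(user, set())
    peer_group = list(peers.get(user, []))
    if peer_group:
        using = sum(1 for p in peer_group if share in history.get(p, set()))
        peer_ratio = using / len(peer_group)
    else:
        peer_ratio = 0.0
    return first_time, peer_ratio
```

A first-time access paired with a high peer ratio is exactly the kind of benign-looking True Positive described above: anomalous for the individual user, yet perfectly common for her role.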
So, let’s think about this.
Q: Is this access considered anomalous?
A: Yes. She has never used this share before; even though she has the rights, that may simply mean her rights are excessive.
Q: Is it wrong? Is it bad?
A: Probably not.
Q: Is this activity Interesting?
A: It depends on the context of the activity. I would argue it is interesting only in certain specific situations, and certainly not in most others, especially if it is one-time access. However, this isn’t an event to ignore.
True Positive events place a heavy load on the security operations teams that need to check these types of activity. Preempt has conducted research based on real customer data and found that in a network of 1,000 users, these types of incidents occur 2-6 times per day. Multiply this out for larger organizations and you get real overhead to address. This is the main reason incidents get ignored, and it is the real risk behind True Positives.
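The scaling arithmetic above is easy to make concrete. The 2-6 incidents per day per 1,000 users comes from the research quoted above; the function name and the assumption of linear scaling with user count are mine.

```python
def daily_incident_range(num_users: int,
                         per_1000_low: int = 2,
                         per_1000_high: int = 6) -> tuple:
    """Estimate the daily count of benign-but-anomalous events,
    assuming the rate scales linearly with the number of users."""
    scale = num_users / 1000
    return (per_1000_low * scale, per_1000_high * scale)

# e.g. a 10,000-user enterprise: roughly 20-60 such events per day
```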
A recent survey found that more than half of security pros ignore important alerts. This aligns with a survey Dimensional Research and Preempt conducted recently, in which 64% of respondents said security teams are overworked, even when they have the skills.
Remember, this isn’t a False Positive; it is a True Positive, an event correctly identified. However, as with many True Positives, you don’t want to examine every piece of the information, nor do you have the resources. Most True Positives are simply harmless.
What are your options? Here are a few:
- In an ideal world, have someone investigate, acknowledge, log, and resolve each incident.
- Build (or buy) a smart system that takes multiple dimensions into account so that benign True Positives do not raise alerts.
- Use the power of the masses. Empower end users to verify their identity and self-approve their own access. This approach leaves the security team to investigate only the incidents they actually care about.
With Preempt you can engage end users in the process with almost zero effort. Beyond the benefit described above, this has an important side effect: end-user input is fed back into the behavioral system, turning the models into supervised ones and making them even more accurate. This is a unique capability in Preempt’s Lite.
P.S. A common question that comes up in reaction to option 3 above: what about rogue insiders who might take advantage of this and approve all their actions regardless? Rest assured that they are still continuously monitored. That may be a topic for a future blog. Stay tuned.
Posted by Eran Cohen on April 20, 2017 8:45 AM