Hostile by definition: Thoughts on Zero Trust Security – and its pitfalls. Part 1
(Note: see part two of our Zero Trust blog posts)
It’s a common question after a major breach: did you do everything you could have done to protect your network? Most of the time the answer is…probably not. Often, we live with a false sense of security. We know it, and most of us are OK with it. But let’s talk about what’s practical and what steps can be taken to help you get to a better security posture.
Before that, let’s look at some examples of what I mean by a “false sense of security.” We drive a box made of “strong” material at 80+ miles per hour and we aren’t scared because we have airbags to protect us. We skydive, stepping out of a plane with a piece of fabric called a parachute. We drink, we live in skyscrapers, we use a single password for too many logins, and so on. You get the point. The reason we accept these things is risk management. We do the math and decide whether it’s worth it or not. Simple, right? With relatively low effort, it can indeed be simple to reduce risk levels and dramatically improve the security posture of an enterprise as well. Because we use industry tools to secure our organization’s digital presence, we can fall for the same premise. It’s an appealing proposition: better security with minimal effort, easy!
For the last five years or so I have had a genuine interest in a philosophy that has been maturing since 2010 – the Zero Trust Network (ZTN), a model created by former Forrester analyst John Kindervag (now at Palo Alto Networks). In this blog, we’ll explore the goals of Zero Trust and some of the challenges it poses.
This framework is considered a shake-up in how the industry perceives network security, since with Zero Trust we’re supposed to assume the inside network is hostile by definition. So all users – regardless of whether they are inside the network and past the perimeter – are equally required to prove their identity when they access any corporate resource. And they need to do so again with every access. This way, if a service or user is compromised, the threat is compartmentalized within that service. To make it even simpler to understand: I think of this model as making lateral movement so noisy that it is hard for it to go unnoticed – and, by design, much harder to attempt.
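The per-access verification idea above can be sketched in a few lines of code. This is a minimal illustration, not a real Zero Trust product: the shared key, token format, and resource names are all hypothetical. The point is that every access presents a short-lived credential scoped to one resource, so a stolen token cannot be replayed against a different service.

```python
import hmac
import hashlib
import time

SECRET = b"shared-secret"  # hypothetical signing key for this sketch


def issue_token(user: str, resource: str, ttl: int = 60) -> str:
    """Issue a short-lived token scoped to a single resource.

    The short TTL forces the client to re-prove identity frequently,
    mirroring the 'verify on every access' principle."""
    expiry = int(time.time()) + ttl
    payload = f"{user}|{resource}|{expiry}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"


def verify(token: str, resource: str) -> bool:
    """Called on EVERY access: no implicit trust from network location."""
    try:
        user, res, expiry, sig = token.rsplit("|", 3)
    except ValueError:
        return False
    payload = f"{user}|{res}|{expiry}"
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    # Scoping the token to one resource is what compartmentalizes a
    # compromised credential and makes lateral movement noisy.
    return res == resource and int(expiry) > time.time()


tok = issue_token("alice", "hr-portal")
print(verify(tok, "hr-portal"))  # True
print(verify(tok, "mainframe"))  # False: token does not transfer laterally
```

An attacker who steals this token gets access to exactly one resource for at most sixty seconds; any attempt to reuse it elsewhere fails and can be logged.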
In practice, implementing a Zero Trust approach can require re-architecting your entire enterprise environment. Some Zero Trust models aim to eliminate the need for remote access VPN and instead make all internal application access cloud-like. Let’s peel off the layers of Zero Trust and simplify it a bit, just to establish common ground for discussion.
From 30,000 feet:
It looks like this: there are no more firewalls, and trust between users and services must be established at a point in time, based on multiple parameters.
From 20,000 feet:
In practice, there’s actually some trust within the network – often a proxy somewhere that is trusted by service providers, typically web services or identity providers (IdPs).
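That concentration of trust can be sketched as well. In the toy example below – with a hypothetical signing key and header names – the identity-aware proxy authenticates the user and signs an identity assertion; downstream services accept only requests the proxy has vouched for, rather than anything arriving from the internal network.

```python
import hmac
import hashlib

# Hypothetical key shared only between the proxy and backend services.
PROXY_KEY = b"proxy-signing-key"


def proxy_forward(user: str, path: str) -> dict:
    """The proxy authenticates the user (e.g. against an IdP), then
    attaches a signed identity assertion to the forwarded request."""
    assertion = f"{user}:{path}"
    sig = hmac.new(PROXY_KEY, assertion.encode(), hashlib.sha256).hexdigest()
    return {"X-User": user, "X-Path": path, "X-Proxy-Sig": sig}


def service_accepts(headers: dict) -> bool:
    """The service trusts the proxy's signature, not the network the
    request arrived from."""
    assertion = f"{headers.get('X-User')}:{headers.get('X-Path')}"
    expected = hmac.new(PROXY_KEY, assertion.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(headers.get("X-Proxy-Sig", ""), expected)


good = proxy_forward("alice", "/payroll")
print(service_accepts(good))  # True

tampered = proxy_forward("alice", "/payroll")
tampered["X-User"] = "mallory"
print(service_accepts(tampered))  # False: assertion no longer matches
```

The trade-off is exactly the one noted above: the proxy itself becomes a trusted component, so the model is not literally “zero” trust – trust is relocated and concentrated, not eliminated.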
Below 20,000 feet:
To truly understand Zero Trust at a granular level, we must understand the challenges enterprises face in implementing a Zero Trust framework. Here are a few examples:
- Legacy apps, legacy network resources, administrative tools, and protocols are part of the network and of enterprise operations. For example, mainframes, HR systems, PowerShell, and PsExec are commonly excluded from the Zero Trust architecture, yet they are essential to operations – just like protocols such as NTLM, which should have gone away years ago but are here to stay. Traditionally, these can’t be protected with identity verification, and re-architecting them is often cost-prohibitive. Many times these legacy systems are excluded from the approach, which makes them the weakest link. In other cases, security teams create an inconsistent user experience or, where possible (e.g. PsExec), prohibit tools from being used, which reduces staff productivity.
- Regulations have not yet adopted the Zero Trust model, which means organizations under compliance will have trouble passing an audit. If PCI-DSS requires the use of firewalls and segmentation of sensitive data, how do you pass audits when there are no firewalls? Will such a move put the whole environment under the regulation? What are the implications when regulations mandate segmentation and Zero Trust does not? Regulations will need to change before we can use this model completely and robustly.
- Visibility and control within the network are among the major factors challenging enterprises’ implementation of Zero Trust networks. Most organizations don’t have a comprehensive view into – or the ability to set policies around – all individual users within their network, and are thus vulnerable to threats posed by unpatched devices, legacy systems, and over-privileged users.
While there are more examples, these topline points highlight the fact that we are a long way from organizations becoming 100 percent Zero Trust compliant: for now, that would require major surgery on an organization’s IT infrastructure. In the near term, a hybrid approach to Zero Trust will likely be the status quo.
In our next blog, we’ll explore how you can use an Identity and Access Threat Prevention approach to implement a compliant, effective Zero Trust model that doesn’t require a cost-prohibitive – and potentially ineffective – redesign of your entire enterprise environment.
Posted by Eran Cohen on October 4, 2018 12:03 PM