Almost a decade after “Zero Trust” emerged as an approach to network security, the buzz around it is stronger than ever. Recent IDG research identifies Zero Trust as one of the two most-researched security approaches and one of the areas drawing the greatest interest. Zero Trust rejects the outdated idea that everything inside the internal network is safe, while everything outside it is unsafe. It assumes instead that nothing is inherently safe, raising the possibility that internal versus external may no longer even be a useful way to look at network security.
The recent rise of Zero Trust suggests the time has come to completely rethink how we define trust in considering how to secure critical data and resources. But why is Zero Trust in particular gaining traction now? And is it really the best way to ensure effective network security today? To answer these questions, let’s take a closer look at the thinking behind the concept and examine whether it makes sense as a way to manage cybersecurity risk in the era of digital transformation.
How We Got Here: A Short History of the Thinking Behind Zero Trust
It wasn’t so long ago that the standard approach to network security assumed everything inside the network perimeter was safe, while everything outside it was deemed unsafe. Securing the network was, therefore, a relatively simple matter of blocking external traffic. At some point, though, the need to support remote access made that approach untenable. Firewalls evolved to support rules that punched holes in the perimeter to let in at least some external traffic, and organizations invested heavily in virtual private networks (VPNs) as well as Intrusion Detection System (IDS) and Intrusion Prevention System (IPS) solutions for protection. This progression reflects the tendency of security professionals to layer solution on top of solution to avoid being exposed to new vulnerabilities. That is how organizations have historically addressed the threat of an expanding attack surface, in an effort to embrace change while enhancing security.
The process of digital transformation—including the journey to the cloud (in the context of both infrastructure and SaaS), the rapid rise of mobility and the growth of IoT—has taken network security to a tipping point. As security professionals, we’re forced to acknowledge that adding more and more policy rules to classic network security is no longer effective. As Doug Barth and Evan Gilman asked in Zero Trust Networks, “Have you found the overhead of centralized firewalls to be restrictive? Perhaps you’ve even found their operation to be ineffective? Have you struggled with VPN headaches, TLS configuration across a myriad of applications and languages, or compliance and auditing hardships?”
Zero Trust reflects an approach to security that no longer assumes the old rules still apply. It’s hard to accept “external bad, internal good” when malware, insider threats and privilege misuse are becoming more impactful, with almost 20 percent of security incidents and 20 percent of confirmed breaches coming from inside the network. Furthermore, applications and data now reside in multiple locations, out of the organization’s control, and users are on the go, looking for a seamless access experience regardless of what devices they are using, their location or the resource they are accessing. In a time of digital transformation, a protected-perimeter strategy doesn’t work, because it’s hard to know where the physical perimeter is—or whether it even exists. This is why securing a software-defined perimeter (SDP) may be more realistic than securing a physical network perimeter. That means a shift from trusting IP addresses to establishing trust in users, devices, resources (applications and data) and activities.
Clearly, We Need to Rethink Network Security. But What Is the Right Approach?
The Zero Trust concept goes beyond “trust, but verify” to demand that we “never trust, always verify.” In reality, though, to actually enable business we need to continually establish the right level of trust within the context of any given interaction. We have to be able to ensure that an entity attempting access to resources is what it claims to be and that it will behave in expected ways for the duration of the interaction.
Zero Trust addresses the need to establish trust in microsegments. But what is the appropriate level of trust that needs to be established? In other words, what is the level of risk we are willing to accept to allow business to happen, while still mitigating security risks? The answer: It depends. Specifically, it depends on the impact of rogue access to a resource, and there are a number of possibilities to consider in that regard—the impact of a breach of personal data, a compromise of intellectual property or reputational damage, to name just a few examples.
Instead of Zero Trust, we need to establish the right level of trust, both for a user to perform the activity they want and for an entity (whether that’s a user, a machine or something else) to access the resource it wants to access. And we need to grant access with the least level of privilege required to perform the activity. Once trust is established and access is granted, our work is not done. Trust is dynamic. It changes over time as the access environment and context change. Trust and access must be adapted accordingly, based on changing levels of risk.
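The idea of least-privilege grants combined with dynamic, risk-driven trust can be sketched in a few lines of code. Everything here—the access levels, the risk thresholds and the `Session` structure—is an illustrative assumption, not part of any specific product or standard:

```python
from dataclasses import dataclass

# Hypothetical privilege levels, ordered from least to most privileged.
ACCESS_LEVELS = ["none", "read", "write", "admin"]

@dataclass
class Session:
    user: str
    resource: str
    granted: str = "none"   # current privilege level
    risk: float = 0.0       # 0.0 (low) .. 1.0 (high), re-evaluated continuously

def least_privilege(requested: str, needed: str) -> str:
    """Grant no more than the privilege the activity actually needs."""
    req = ACCESS_LEVELS.index(requested)
    need = ACCESS_LEVELS.index(needed)
    return ACCESS_LEVELS[min(req, need)]

def adapt(session: Session, new_risk: float) -> Session:
    """Trust is dynamic: as assessed risk rises, privilege is stepped down."""
    session.risk = new_risk
    if new_risk > 0.8:
        session.granted = "none"       # revoke access entirely
    elif new_risk > 0.5 and session.granted in ("write", "admin"):
        session.granted = "read"       # reduce privilege, keep business running
    return session

# A user asks for admin, but the activity only needs write access.
s = Session(user="alice", resource="payroll-db",
            granted=least_privilege("admin", "write"))
# Later, the context changes and risk rises; access adapts accordingly.
s = adapt(s, 0.6)
```

The point of the sketch is that the grant is not a one-time decision: `adapt` runs whenever the environment or context changes, so trust and access track the current level of risk rather than the level at login time.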
Gartner calls this CARTA, or Continuous Adaptive Risk and Trust Assessment: “Customer interest in and vendor marketing of a ‘zero trust’ approach to networking are growing. It starts with an initial security posture of default deny. But, for business to occur, security and risk management leaders must establish and continuously assess trust using Gartner’s CARTA approach.”
As the perimeter shifts from the network layer to the SDP, where users can seamlessly access resources from anywhere, using any device, whether on-premises or in the cloud, a holistic approach to managing digital risk needs to be established above the network layer. This approach needs to address the risk question considering the SDP components: user, device, resource and activity. We need to establish trust with regard to these components relative to the level of risk associated with the access request.
Three Steps to Establish Trust and Manage Digital Risk: Visibility, Insight and Action
Establishing the appropriate level of trust to manage the risk associated with requests for access requires assessing the risk and then taking appropriate action. The action can be elevating the level of trust, blocking access or simply granting seamless access. The risk analysis requires visibility into what’s happening in the system, insight into the data and action based on that insight.
Visibility comes with the ability to collect data on users (identities), devices, resources and activities (workloads). Having unified visibility across all access use cases and user activities, as well as information about the network and endpoints, creates a rich view of the access environment. It also enables security and business teams to create accurate authorization/authentication policies in one place (rather than having to manage multiple firewalls, VPNs, proxies and other systems) and support compliance/auditing requirements. Next, this data needs to be analyzed to create insight.
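One way to picture unified visibility is a single, normalized record per access attempt, whatever system reported it. The schema below is a hypothetical sketch of such a record, covering the SDP components named above (user, device, resource, activity) plus network context; the field names are assumptions for illustration:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AccessEvent:
    """One normalized record per access attempt, regardless of source
    (VPN, proxy, cloud app, endpoint agent)."""
    timestamp: datetime
    user: str          # identity making the request
    device: str        # device identifier / posture reference
    resource: str      # application or data being accessed
    activity: str      # what the user is doing (the workload)
    source_ip: str     # network context

# A single event collected from, say, a cloud application log.
event = AccessEvent(
    timestamp=datetime.now(timezone.utc),
    user="alice@example.com",
    device="laptop-4711",
    resource="crm-app",
    activity="export_report",
    source_ip="203.0.113.7",
)
```

Keeping every access use case in one schema is what makes it practical to author authorization policies in one place instead of across multiple firewalls, VPNs and proxies.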
Insight is making sense of the collected data. This is the risk assessment stage. Combining the following elements will provide holistic visibility, drive insight and facilitate the proper action required for trust elevation:
Business context: What is the impact of rogue access to an application or asset on the business? How critical is this asset?
Identity insights: How much confidence is there that the user is who they claim to be at the time of access and after access is granted? What does the organization know about the user, their device, what they should access and how they typically behave?
Threat insights: These consist of classical threat intelligence (such as the IPs communicating with the system), along with insights driven from a UEBA/SIEM/DLP system. They make it possible to understand whether the components of the interaction—device, network, supporting infrastructure—can be trusted and whether the threats are actual risks to the organization.
Activity insights: These include information about all aspects of how a user accesses resources and what they do once they have access.
After the organization has applied these insights to evaluate the level of risk, as well as the potential impact of an access request on the business, the next step is taking action to manage the attendant digital risk. Ensuring that users gain access only to the applications they need, that their access is sufficient and that they are really who they claim to be at the time of access are all made possible by access certifications. When the risk is high, additional authentication can be requested—or privilege can be reduced if warranted.
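The insight-to-action loop above can be sketched as a small scoring function. The weights, score ranges and action names are purely illustrative assumptions—a real system would tune them per asset and policy—but the structure mirrors the four insight categories and the three possible outcomes (seamless access, trust elevation, block):

```python
def risk_score(business_impact: float, identity_confidence: float,
               threat_level: float, activity_anomaly: float) -> float:
    """Blend the four insight categories into a 0..1 risk score.
    All inputs are 0..1; the blend is an illustrative choice."""
    exposure = (threat_level + activity_anomaly + (1 - identity_confidence)) / 3
    return business_impact * exposure

def decide(score: float) -> str:
    """Map assessed risk to an action (thresholds are assumptions)."""
    if score < 0.2:
        return "grant"          # seamless access
    if score < 0.6:
        return "step_up_auth"   # elevate trust, e.g. an MFA challenge
    return "block"              # deny, or reduce privilege if warranted

# Low-impact asset, high identity confidence, quiet threat picture:
action = decide(risk_score(business_impact=0.3, identity_confidence=0.95,
                           threat_level=0.1, activity_anomaly=0.1))
```

Multiplying by business impact captures the point made earlier: the same anomalous behavior warrants a stronger response against a crown-jewel asset than against a low-value one.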
When you consider undertaking a Zero Trust project, think about the broader perspective of managing digital risk. Look to support Zero Trust in a way that allows for the dynamic, continuing assessment of risk and that enables the business by continually applying visibility, insight and action to protect your most valuable assets.
Gartner, “Zero Trust Is an Initial Step on the Roadmap to CARTA,” Neil MacDonald, 10 December 2018
# # #
Learn more about Zero Trust and RSA’s role in the video Zero Trust: Network Security in the Era of Digital Transformation.
Join the #TalkingDigitalRisk conversation on Twitter and social media by following @RSAsecurity
Author: Ayelet Biger-Levin
Category: RSA Point of View, Blog Post
Keywords: Digital Risk, Digital Risk Management, Zero Trust, Gartner CARTA