An earlier version of this blog post ran on NASDAQ on June 1, 2023.
When you strip away all the jargon, policies, titles, and standards, cybersecurity has always been a numbers game. Security teams must protect X many users, applications, entitlements, and environments. Organizations rely on Y many security professionals to protect those resources. They have budget Z to spend on technologies, tools, and training.
Those numbers are getting away from us. The identity universe is expanding far faster than human actors can keep pace: in a 2021 survey, more than 80% of respondents said that the number of identities they managed had more than doubled, and 25% reported a 10X increase.
And it’s not just that we’re creating more identities; we’re creating identities that can do far more than they need to. Roughly 98% of permissions go unused. Those risks compound as organizations bolt on more cloud environments: Gartner predicts that the “inadequate management of identities, access and privileges will cause 75% of cloud security failures” this year, and that half of enterprises will mistakenly expose some of their resources directly to the public.
It’s no real wonder, then, that 58% of the time, security teams found out through threat actor disclosure that they had been breached. That’s right: more than half the time, organizations only learned they had been beaten when the bad guys told them they had lost.
Time and again, we’ve seen threat actors exploit these numbers by attacking organizations’ identity infrastructures and launching some of the highest-profile and most damaging cyberattacks in recent memory. Colonial Pipeline, SolarWinds, LAPSUS$, and state-sponsored threat actors all demonstrated how large, interconnected, and vulnerable identity infrastructures have become.
Don’t get me wrong: I don’t mean to blame cybersecurity teams for these breaches. It’s not just that their adversaries were clever, or lucky, or both. It’s not just that private organizations can’t be expected to match a nation-state’s resources. Focusing on those variables misses the point, which is that human actors can no longer be expected to ensure the security, compliance, and convenience of an organization’s IT estate. The speed, scope, and complexity of what we must protect have grown several orders of magnitude beyond what the human mind can even conceive of, let alone secure.
It’s not that the numbers don’t add up; it’s that their sum exceeds human capacity.
Given how big and complex the IT universe is today, and how much bigger and more complex it’s poised to become, it’s unreasonable to expect identity and security teams to deliver a secure, compliant, and convenient IT universe. I don’t think humans can do that on their own.
The good news is that they don’t have to. Just as the identity universe is expanding beyond human capacity, artificial intelligence—AI—has reached a point where it can help secure the entire identity lifecycle. We’re creating new tools suited to this moment and capable of protecting the gaps and blind spots that threat actors exploit.
AI is suited to this moment because it’s great at doing something that humans have always struggled with: making sense of large quantities of data quickly.
As an example, recall that 98% of entitlements are never used. That’s likely a result of IT and identity teams overprovisioning accounts from the moment a new user is onboarded and an account is created. We have too many entitlements baked in from the start, and we can’t react quickly enough to provision appropriate access when needed.
Humans tend to see the world in coarse-grained approximations: we reckon that Engineering needs access to the Dev server, that the Ops team needs access to the Prod server, and that Admins need access to both. Many governance solutions are built on these coarse-grained approximations: role-based access control (RBAC) assigns privileges based on a person’s department within an organization. Marketing employees should have access to entitlements A, B, and C, whereas Finance should have access to entitlements D, E, and F.
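The coarse-grained model above can be sketched in a few lines. This is a minimal illustration of RBAC as the post describes it, not any real product’s implementation; the roles and entitlement names are the hypothetical ones from the example.

```python
# A minimal sketch of coarse-grained RBAC: entitlements are granted
# wholesale by role. Roles and entitlement names are illustrative.
ROLE_ENTITLEMENTS = {
    "Marketing": {"A", "B", "C"},
    "Finance": {"D", "E", "F"},
}

def entitlements_for(role: str) -> set[str]:
    """Every member of a role receives every entitlement in it."""
    return ROLE_ENTITLEMENTS.get(role, set())

# Anyone in Marketing gets A, B, and C -- whether they ever use them or not.
print(sorted(entitlements_for("Marketing")))  # ['A', 'B', 'C']
```

Note what the model cannot express: there is no notion of *when* a user needs an entitlement or *whether* they ever exercise it, which is exactly how the 98% of unused permissions accumulate.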
And while coarse-grained approximations are useful constructs, they’re fundamentally at odds with the zero-trust directive to provision the bare minimum of entitlements needed to fulfill a role. Zero trust demands fine-grained, just-in-time analysis and decision-making. Getting to zero trust means having an almost molecular understanding of who a user is, what they need, when they need it, how they should use it, and why. It also demands reexamining that information nearly every moment and continuously assuring a request is appropriate.
Humans can’t operate at that level or speed. But AI can. A machine isn’t daunted by thousands of users with millions of entitlements changing every second. To the contrary, a machine can become more effective by learning from a broader dataset. While humans can be overwhelmed by that much data, machines can use it to develop stronger, better, faster cybersecurity.
I’ve said it before, but it bears repeating: we have zero chance of getting to zero trust without AI.
We’ve seen AI’s cybersecurity contributions firsthand. For nearly 20 years, RSA has used machine learning and behavioral analytics to improve customers’ authentication. Our Risk AI capability learns every user’s typical behavior, then applies contextual signals—including the time of day a user is making a request, the device they’re using, their IP address, access patterns, and other factors—to arrive at an identity confidence score and, if needed, automate step-up authentication.
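The pattern described above, weighing contextual signals into a confidence score and stepping up authentication when the score is low, can be sketched as follows. This is a hedged illustration of risk-based authentication in general; the signals, weights, and threshold are arbitrary assumptions, not RSA’s actual model.

```python
# Illustrative risk-based authentication: contextual signals are weighted
# into an identity confidence score, and a low score triggers step-up
# authentication. All signals, weights, and the threshold are assumptions.
from dataclasses import dataclass

@dataclass
class LoginContext:
    known_device: bool          # is this a device the user normally uses?
    usual_ip: bool              # is the request from a familiar IP range?
    usual_hours: bool           # is this the user's typical time of day?
    usual_access_pattern: bool  # does the request match past behavior?

def identity_confidence(ctx: LoginContext) -> float:
    """Sum the weights of the signals that match the user's baseline."""
    weights = {
        "known_device": 0.4,
        "usual_ip": 0.25,
        "usual_hours": 0.15,
        "usual_access_pattern": 0.2,
    }
    return sum(w for name, w in weights.items() if getattr(ctx, name))

def requires_step_up(ctx: LoginContext, threshold: float = 0.7) -> bool:
    """Demand a stronger factor when confidence falls below the threshold."""
    return identity_confidence(ctx) < threshold

# Familiar device, IP, and access pattern at an odd hour: score 0.85,
# above the threshold, so no step-up is required.
ctx = LoginContext(known_device=True, usual_ip=True,
                   usual_hours=False, usual_access_pattern=True)
print(requires_step_up(ctx))  # False
```

A production system would learn these weights from behavioral data rather than hard-code them; the point of the sketch is the shape of the decision, not the numbers.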
And that’s just authentication: organizations can get better results, more value, and stronger security by applying identity intelligence across a unified identity platform that integrates authentication with access, governance, and lifecycle. RSA recently announced new automated identity intelligence capabilities for RSA Governance & Lifecycle. Soon, we’ll bring additional dashboarding and intelligence to our solutions and help customers understand their overall access risk posture, identify high-risk users, applications, and locations, and determine needed policy changes to better secure critical assets.
Identity has always been an organization’s shield. Identity tells us who to let in and establishes how we verify someone is who they claim to be. It dictates what our users should have access to.
Identity creates every organization’s initial and most critical defenses. But if identity is the defender’s shield, then it’s also the attacker’s target. In fact, identity is the most attacked part of the attack surface: 84% of organizations reported an identity-related breach in 2022, per the Identity Defined Security Alliance. Verizon has found that passwords have been a leading cause of data breaches every year for the last 15 years.
We can’t wait for the Security Operations Center (SOC) to step in: a rapidly growing identity universe means more endpoints, more network traffic, and more cloud infrastructure for SOC teams to monitor. They already lack visibility into identity threats like brute-force attempts, rainbow tables, and unusual user activity; it’s unreasonable to expect them to take on identity threats now that those threats are becoming more pronounced.
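To make one of those identity threats concrete, consider brute-force detection: flagging an account that accumulates too many failed logins inside a sliding time window. The sketch below is a generic illustration of the idea, not any vendor’s detection logic; the window size and threshold are arbitrary assumptions.

```python
# Illustrative brute-force detection: flag any account with more than
# MAX_FAILURES failed logins inside a sliding WINDOW_SECONDS window.
# Both constants are arbitrary assumptions for the sketch.
from collections import defaultdict, deque

WINDOW_SECONDS = 300
MAX_FAILURES = 5
_failures: dict[str, deque] = defaultdict(deque)

def record_failed_login(user: str, ts: float) -> bool:
    """Record one failure; return True if it pushes `user` over the threshold."""
    q = _failures[user]
    q.append(ts)
    # Drop failures that have aged out of the sliding window.
    while q and ts - q[0] > WINDOW_SECONDS:
        q.popleft()
    return len(q) > MAX_FAILURES

# Six rapid failures against one account trip the detector on the sixth.
alerts = [record_failed_login("alice", t) for t in range(6)]
print(alerts)  # [False, False, False, False, False, True]
```

Simple as it is, even this signal typically lives in identity logs the SOC never sees, which is the visibility gap the post is pointing at.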
With the SOC overwhelmed and identity under attack, identity must adapt. It’s not enough that an identity platform is great at defense. In the future, identity also needs to be great at self-defense.
We need to build platforms that do identity threat detection and response (ITDR) intrinsically—not as a feature or an option, but as a fundamental part of their nature.
Our industry is working to develop those capabilities. At RSA, we’re expanding risk-based authentication across our Unified Identity Platform to prevent risks, detect threats, and automate responses.
We must prioritize this work, because our adversaries are already using AI to hone and accelerate their attacks. AI can write polymorphic malware, improve and execute phishing campaigns, and even hack basic human judgment and reasoning with deepfakes.
Integrating AI into cybersecurity will be difficult but essential work. It will ultimately mean better, smarter, faster, and stronger cybersecurity. Our sector is in the early days of using AI to secure organizations, but the signs are promising: IBM found that organizations with fully deployed AI security and automation reduced the time it took to identify and contain a breach by 74 days and lowered the cost of a data breach by more than $3 million.
But it won’t be without its challenges: we humans face a pending identity crisis of our own. We cybersecurity professionals will need to reimagine our roles as we work alongside AI. We’ll have to learn new skills in training, supervising, monitoring, and even protecting AI. We’ll need to prioritize asking AI better questions, setting its policies, and refining its algorithms to stay a step ahead of our adversaries.
Ultimately, it’s not just the technology that must evolve. It’s all of us.