Last week, AI company Anthropic acknowledged a security incident in which a draft blog post was leaked to Fortune.
The blog post described Anthropic’s new project, “Claude Mythos,” and new model, Capybara, as a “step change” in AI assistants, touting a “general purpose model with meaningful advances in reasoning, coding, and cybersecurity.”
The Anthropic blog also warned: “In preparing to release Claude Capybara, we want to act with extra caution and understand the risks it poses — even beyond what we learn in our own testing…In particular, we want to understand the model’s potential near-term risks in the realm of cybersecurity — and share the results to help cyber defenders prepare.”
The draft also noted that the new model “presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders.”
Claude Mythos could represent a major risk to organizations’ cybersecurity posture. The new model will follow a wave of growing risks caused by AI, agents, and other non-human identities (NHI):
- Gartner named agentic AI the top technology trend of 2025 and predicted that 33% of enterprise apps will include agentic AI by 2028, up from less than 1% in 2024.
- Other research shows that NHI outnumber human users by 45 to 1 in DevOps environments.
- A recent survey found that 60% of enterprises lack confidence in their ability to adequately secure NHI.
Whether it’s Claude Mythos specifically, ChatGPT, or some other agentic service, the talk last week at the RSAC Conference focused on how AI is transforming the threat landscape.
Here are the best practices we shared with customers that can help organizations prepare to use AI and defend against it.
AI-powered threats aren’t coming—they’re here. A previous Anthropic blog post explains that a state-sponsored group used the agent to infiltrate “roughly thirty global targets and succeeded in a small number of cases” by “pretending to work for legitimate security-testing organizations” to avoid Claude’s guardrails. We’ve continued to see AI-driven phishing attacks aimed at stealing users’ credentials.
Organizations need the right identity controls in place to prepare for more—and more effective—attacks. Those include:
- Passwordless authentication: Going passwordless can remove the credentials that AI-driven phishing attacks try to steal. Aim to deploy passwordless at scale for every user in every environment.
- Secure the authentication process: Going passwordless is a great first step toward keeping organizations secure against AI-driven threats, but it’s only the first step. Keep the authentication process itself secure by using solutions that dynamically evaluate user signals and require step-up authentication during high-risk login attempts, or use proximity verification to ensure that a user’s device is close to what they’re trying to access.
- Bi-directional identity verification: Deepfakes will make it easier for attackers to pose as either users or help desk personnel. Agents will help adversaries create more convincing social engineering attacks. Organizations need a simple way to account for both tactics and verify that a user is who they claim to be, to prevent the type of attacks that cost MGM Resorts, Caesars Entertainment Group, and Marks & Spencer hundreds of millions of dollars.
- Identity governance and administration (IGA): During most breaches, attackers move quickly to escalate privileges and expand access. Having an advanced IGA program helps contain that risk by ensuring identities only have access to what is necessary, enforcing least privilege, and supporting a Zero Trust approach.
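The step-up authentication approach described above can be sketched as a simple risk score computed over login signals. This is a minimal illustration, not any vendor’s API: the signal names, weights, and threshold are all hypothetical.

```python
# Minimal sketch of risk-based step-up authentication.
# All signal names, weights, and the threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class LoginSignals:
    new_device: bool          # device never seen for this user before
    impossible_travel: bool   # geolocation jump inconsistent with last login
    privileged_resource: bool # target app handles sensitive data

def requires_step_up(signals: LoginSignals) -> bool:
    """Return True when the login should trigger step-up authentication."""
    risk = 0
    if signals.new_device:
        risk += 1
    if signals.impossible_travel:
        risk += 2
    if signals.privileged_resource:
        risk += 1
    return risk >= 2  # illustrative threshold

# A new device attempting an impossible-travel login should be challenged:
print(requires_step_up(LoginSignals(True, True, False)))
```

In a real deployment these signals would come from a policy engine that evaluates device posture, location, and resource sensitivity continuously rather than from hard-coded weights.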
It’s not enough just to have security capabilities in place to defend against external AI. Organizations also need identity security controls that allow them to use AI safely. The 2026 RSA ID IQ Report, an industry survey of more than 2,100 global cybersecurity, identity and access management (IAM), compliance, and tech leaders, found that 91% of organizations planned to implement some form of AI in their cybersecurity stack this year.
To use AI securely, organizations must:
- Treat every agent like an identity. Organizations need to treat every agent, bot, and AI service as an identity, bringing the same level of controls, permissions, and oversight to NHI as they do to human users. Inventory what you have first and understand its access. And require passwordless authentication for agents to eliminate the risk of employees hard-coding their passwords into bots that act on their behalf; otherwise, you’re just releasing an insecure bot into the wild.
- Communicate your policies clearly, loudly, and frequently. This isn’t just a technology problem. Organizations need to identify which AI services are fair game and what data employees can share with them. Shadow AI represents a huge risk of losing PII, financial information, and other restricted data.
- Build better IGA. To use AI securely, organizations need governance that keeps pace with the scale of identity data. That means ensuring identities only have the access they need, maintaining continuous visibility into access, and turning complex data into clear, prioritized actions. AI helps reviewers make confident decisions and gives security professionals insights to identify risk and act at scale, enabling continuous, intelligent control over access.
- Maintain data sovereignty for AI. Organizations with enhanced compliance, security, and availability requirements should consider sovereign deployment capabilities to ensure full control over their data, where it resides, and who can access it. Look for solutions that support full IAM capabilities in private cloud, multi-cloud, on-premises, and air-gapped configurations.
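The first step above — inventorying agents and understanding their access — pairs naturally with a least-privilege audit: compare what each non-human identity has been granted against what it actually needs. The sketch below is purely illustrative; real IGA platforms use their own schemas and entitlement models.

```python
# Hypothetical sketch: inventory non-human identities (NHI) and flag
# entitlements beyond each agent's declared need (least privilege).
# The agent names, scope strings, and data model are illustrative assumptions.

AGENTS = {
    "build-bot": {
        "granted": {"repo:read", "artifact:write"},
        "needed":  {"repo:read", "artifact:write"},
    },
    "report-agent": {
        "granted": {"db:read", "db:write", "mail:send"},
        "needed":  {"db:read", "mail:send"},
    },
}

def excess_entitlements(agents: dict) -> dict:
    """Map each agent to entitlements it holds but does not need."""
    return {
        name: sorted(a["granted"] - a["needed"])
        for name, a in agents.items()
        if a["granted"] - a["needed"]
    }

# Only the over-privileged agent is flagged, with its surplus scope:
print(excess_entitlements(AGENTS))
```

Running this kind of review continuously, rather than as a one-off audit, is what turns an agent inventory into an enforceable least-privilege policy.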