A few weeks ago, a friend of mine got a text from his boss:

“Hi Matt,I’m on a conference call meeting right now, can’t talk on phone but let me know if you got my text . Thanks [Boss’ name].”

If it’s not obvious, the text was a phish. There are a few tells giving it away: the odd spacing, the strange choice to sign a text, and the claim that his boss needed to speak with Matt immediately—just not by phone.

Crude as that phish was, it’s still better than many scams sent via email: the phisher knew Matt’s name, his phone number, and who he works for. Scammers can use those facts to make a more credible lure and get someone to click a link, download a file, or hand over sensitive information.

Phishing remains one of the top causes of cybersecurity breaches. That’s not because threat actors are nostalgic for old scams: it’s because phishing still works.

Troublingly, the launch of ChatGPT and other bots could make for smarter, more ubiquitous, more effective phishing lures.

What is ChatGPT?

For the last month, ChatGPT has been all over the news: teachers are worried about it, will.i.am referenced it at Davos, and there’s even advice on how to apply it to online dating.

ChatGPT is “a new cutting-edge A.I. chatbot” that one New York Times journalist calls “quite simply, the best artificial intelligence chatbot ever released to the general public.” Developed by OpenAI, it can generate remarkably human-like text.

While the tool may have a lot of kooky uses, some cybersecurity experts are worried about what ChatGPT will empower threat actors to do. CyberArk reported that the bot has created “a new strand of polymorphic malware.”

Others are worried about a cruder—but still effective—use: deploying natural language capabilities to create more convincing phishing lures.

AI Goes Phishing

Human writers and chatbots can combine to make effective phishing lures because they each bring unique strengths to the table.

Human writers understand social engineering and know how to craft messages that will be appealing and convincing to their intended targets. They can also draw on context and cultural references that make a message more believable to its recipients.

Chatbots built on large language models, on the other hand, understand and generate human language, which allows them to mimic the style and tone of real organizations. They can also generate personalized messages at scale, which increases the chances of tricking a recipient.

When the two are combined, a human writer can craft a phishing message that will appeal to a target, and a chatbot can personalize it and generate it at scale for every recipient. That makes the lure more convincing and increases the chances of success.
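To see why scale is the troubling part, consider how little code basic personalization takes. The sketch below is nothing more than a plain mail-merge loop; the names, teams, and URL are all made up. Swap the static template for a language model and every lure becomes unique.

```python
from string import Template

# Minimal mail-merge sketch: all names, teams, and URLs are hypothetical.
# The point is that per-recipient personalization costs almost nothing
# once an attacker has a target list from a breach or a scrape.
lure = Template(
    "Hi $first_name, I'm in a meeting and can't talk right now. "
    "Can you look at this before the $team sync? $link"
)

targets = [
    {"first_name": "Matt", "team": "finance", "link": "https://example.test/doc"},
    {"first_name": "Priya", "team": "sales", "link": "https://example.test/doc"},
]

for target in targets:
    print(lure.substitute(target))  # one personalized message per target
```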

We think this combination of threat actors and bots could become a real problem—but we’ve also got a few ideas on how to address it. Contact us to learn more.

Lower Barriers = More Scams

Anything that lowers the barrier to entry for cybercrime could be a significant threat. In time, threat actors could teach bots to leverage passwords or target email addresses exposed in data breaches.

“Although ChatGPT is an offline service, what I’m worried about is combining Internet access, automation, and AI to create persistent advanced attacks,” said RSA CISO Rob Hughes.

“We’ve seen how persistent prompt-bombing attacks eroded users’ attention,” Hughes continued. “With chatbots, you wouldn’t even need a spammer to craft the message any longer. You could write a script that says ‘Gain familiarity with Internet data and keep messaging so-and-so until they click on the link.’ Turning the operation over to a bot that doesn’t sign off, doesn’t give up, and works on hundreds of users simultaneously could really change the nature of phishing attacks by enabling easy-to-use distributed spear-phishing tools.”

Bots Controlling Bots

Chatbots, deepfake phone scams, or botnets: whether criminals are using an audio file, an email, or some other medium, the format and the tool behind it don’t really matter.

What matters is that we’re developing AI tools that might be on the verge of churning out new and progressively smarter threats faster than human criminals can produce them—and spreading those threats faster than cybersecurity personnel can respond.

The only way organizations can keep up with the rate of change is to control bots by using bots: the same underlying principles that allow ChatGPT to write progressively better jokes or term papers can also train security systems to recognize and respond to suspicious behavior.

Risk engines like RSA® Risk AI can use data collection, device matching, anomaly detection, and behavioral analytics to check access attempts. Risk AI looks at the context of an access request—who is asking for a resource, when they’re making the request, what device they’re using, and other signals—to determine and respond to risk in real time.
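RSA doesn’t publish the internals of Risk AI, but the general shape of context-based risk scoring is easy to sketch. Everything below (the signals, the weights, and the step-up threshold) is hypothetical and simplified; a real engine weighs many more signals and learns its weights rather than hard-coding them.

```python
from datetime import datetime, timezone

# Hypothetical context-based risk scoring. The signals, weights, and
# step-up threshold are illustrative, not RSA Risk AI's actual model.
def score_access_request(device_id: str, hour_utc: int,
                         known_devices: set[str],
                         typical_hours: range) -> str:
    risk = 0.0
    if device_id not in known_devices:
        risk += 0.5  # unrecognized device
    if hour_utc not in typical_hours:
        risk += 0.3  # request outside the user's normal hours
    # A real engine would also weigh geolocation, velocity, and
    # behavioral signals before deciding.
    return "step_up_auth" if risk >= 0.5 else "allow"

now_hour = datetime.now(timezone.utc).hour
print(score_access_request("laptop-07", now_hour,
                           known_devices={"laptop-07"},
                           typical_hours=range(8, 19)))
# Prints "allow" here: the recognized device keeps the score
# below the step-up threshold.
```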

Ultimately, those capabilities help security teams make faster, better-informed security decisions.

It’s Not Just Threat Actors

It’s not just malicious uses of AI that pose a cybersecurity threat to organizations: deliberate enterprise uses of AI, including robotic process automation (RPA), can make far more decisions, far faster, than any individual human can keep up with.

Likewise, the perfect storm of multi-cloud environments and inadequate identity management is set to expose organizations to more risk, with inadequate identity management expected to cause 75% of cloud security failures by 2023.

Again, organizations need solutions that can scale to meet the problem. RSA® Governance & Lifecycle automates nearly all of it: the solution can onboard users with the right entitlements, enforce joiner-mover-leaver policies, and handle provisioning, monitoring, and reporting automatically.
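As a rough illustration of what joiner-mover-leaver automation means in practice, here is a minimal sketch. The roles and entitlements are invented, and real identity governance platforms drive these decisions from HR feeds and policy rather than a hard-coded table.

```python
# Hypothetical joiner-mover-leaver logic: role names and entitlements
# are made up for illustration only.
BIRTHRIGHT = {
    "finance": {"email", "erp_read"},
    "engineering": {"email", "vcs", "ci"},
}

def entitlements_for(role: str) -> set[str]:
    return set(BIRTHRIGHT.get(role, {"email"}))

def on_joiner(role):  # grant birthright access for the new role
    return {"grant": entitlements_for(role), "revoke": set()}

def on_mover(old_role, new_role):  # add what's missing, remove what's stale
    old, new = entitlements_for(old_role), entitlements_for(new_role)
    return {"grant": new - old, "revoke": old - new}

def on_leaver(role):  # revoke everything on departure
    return {"grant": set(), "revoke": entitlements_for(role)}

print(on_mover("finance", "engineering"))
# e.g. {'grant': {'vcs', 'ci'}, 'revoke': {'erp_read'}}
```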

Contact us to learn more about using AI in IAM.

“Don’t focus on the headlines”

Always-on bots programmed to phish; RPA scripts running without anyone watching; lions, tigers, and bears: there’s always going to be another threat.

“Cybersecurity teams can’t always predict what the next threat is going to be or where it’s going to come from,” said Hughes.

“Don’t focus on the headlines. Focus on what you can control: educate your users, use multi-factor authentication, and move toward zero trust. They’re some of the best ways to protect yourself against the threats we know about today and stay protected against the ones that are coming tomorrow.”


Contact us and learn how to fight bots with bots.