On my way to a conference last week, I sat next to a kid playing “Where’s Waldo?” And I thought to myself… if only the security analyst’s life were as easy as finding Waldo.
In reality, however, what if it is the “regular” person standing in plain sight right next to Waldo that I’m actually after, and I’m spending all my time on a goose chase for the wrong person?! Think about it for a minute: I can easily write a rule locating a guy wearing a red-and-white striped shirt, a silly hat, and blue pants. I would find Waldo with no false positives. However, there is NO rule I can write to identify anyone else in the image surrounding our friend Waldo:
- Compared to Waldo, they all look different in every scenario they are featured in, and
- Everyone else in that image (excluding Waldo) looks the same!
This is the reality security operations centers (SOCs) face daily as they fight the rise of internal threats. By internal, I’m referring to insider threats as well as external threats that already have a footprint in the environment (e.g., leveraging compromised credentials).
Let’s take the recent Tesla saboteur case, in which an employee tampered with source code and exfiltrated data. The case highlights the challenges and complexity organizations face daily when trying to identify, let alone predict, malicious users and their actions.
Even mature SOCs with all the right tools in place can be misled by a malicious insider because of low-and-slow – in many cases even legitimate-looking (but not benign!) – behaviors and actions. However, there is a way to identify these attackers when their activity is viewed in aggregate:
- Indicators that, over time, the user is behaving differently
- Logins from a new location
- Deviations from baseline behavior (time, location, velocity, periodicity)
- User patterns that don’t match peer-group or organization-wide behaviors
- And many more…
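To make the aggregation idea concrete, here is a minimal sketch of how weak signals like the ones above can be combined into a single risk score that only crosses an alerting threshold in aggregate. The indicator names, weights, and threshold are purely illustrative assumptions, not taken from any specific UEBA product:

```python
# Hypothetical sketch: aggregating weak behavioral indicators into one
# user risk score. Names and weights are illustrative only.

INDICATOR_WEIGHTS = {
    "off_hours_login": 2.0,       # deviation from the user's usual hours
    "new_location": 3.0,          # login from a never-before-seen location
    "peer_group_mismatch": 4.0,   # activity far outside the peer-group norm
    "unusual_data_volume": 5.0,   # e.g., a large download vs. baseline
}

def risk_score(observed_indicators):
    """Sum the weights of all indicators observed for a user in a window."""
    return sum(INDICATOR_WEIGHTS.get(name, 0.0) for name in observed_indicators)

def triage(observed_indicators, threshold=6.0):
    """Flag the user for analyst review only when the aggregate score
    crosses the threshold; return (score, should_flag)."""
    score = risk_score(observed_indicators)
    return score, score >= threshold
```

With these toy weights, a new-location login alone stays below the threshold, while a new location combined with unusual data volume crosses it – mirroring the point that each indicator is weak on its own but telling in aggregate.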
In what way? It depends. Each scenario is different – same as “Where’s Waldo?” Attackers may not wear the same red-and-white striped shirt every day, they may not use malware in many cases, and they may not match known, blacklisted IOCs. However, what they are – especially at the point of doing something malicious – is different from the normal user. This is where artificial intelligence, in its machine learning form evaluating user behaviors, can come to the rescue.
Before going down the path of User and Entity Behavior Analytics (UEBA), I’ll highlight that I believe UEBA extends the organization’s ability to reduce mean-time-to-detection (MTTD) as an addition to existing, required detection capabilities. An old but still very relevant layered approach is key here. In this way, SOC teams are able to:
- Block and tackle known attack vectors regardless of the terrain attackers are operating in – whether traversing the network or establishing a footprint on critical endpoints
- Watch the watchers (especially the privileged ones) for any malicious abnormal activities
- Cross-correlate indicators from all data sources (user, endpoint, logs, network – for example, an admin account on an infected host moving laterally and exfiltrating data)
- Attribute to the extent possible, leveraging in-house and third-party threat intel to better identify attributed TTPs
- Detect unknown patterns, beyond a point-in-time investigation, focusing on network traffic anomalies, irregular endpoint data, and abnormal user actions, all originating from native data collection technology
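The cross-correlation point above can be sketched very simply: an indicator from any single source may be noise, but the same user surfacing across endpoint, network, and log sources in one time window deserves escalation. The source names and event format below are hypothetical assumptions for illustration, not any product’s schema:

```python
# Hypothetical sketch of cross-correlating indicators across data sources.
# An event is a (user, source) tuple; sources are illustrative labels.
from collections import defaultdict

def correlate(events):
    """Group indicator events by user; return user -> set of sources."""
    by_user = defaultdict(set)
    for user, source in events:
        by_user[user].add(source)
    return by_user

def escalate(events, min_sources=3):
    """Escalate users whose indicators span at least min_sources sources."""
    return sorted(user for user, sources in correlate(events).items()
                  if len(sources) >= min_sources)
```

For example, an admin account flagged in endpoint, network, and log data in the same window would be escalated, while a user with a single log indicator would not.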
Needless to say, all of the above comes second to visibility. You cannot detect what you cannot see! One of the major challenges with detection, especially with machine learning-driven UEBA, is log, network, and endpoint data that isn’t ingested into the analytics engine – creating blind spots. In many scenarios, the solution of choice lacks native, mature collection capabilities, leaving the organization to map between metadata collected and metadata calculated separately – which can turn into a long, tedious, customized project.
Rogue employees highlight the need to evaluate user information during incident investigation – beyond who the user is, to what the user is doing and whether it’s in line with the user’s and peer group’s past behaviors. This means not just a point-in-time alert, but continuous monitoring of identity-based activities. Once a potentially malicious abnormal behavior is detected – using unsupervised mathematical data models that don’t depend on previously known patterns or behaviors – a security analyst can immediately see the exact anomaly and additional context, and elevate the incident priority as needed.
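The unsupervised, baseline-driven detection described above can be illustrated with a deliberately simple z-score model. The choice of login hour as the baselined behavior is an assumption made for illustration only – real UEBA models track far richer features – but it shows the core idea: no known-bad signatures, just deviation from the user’s own history:

```python
# Toy sketch of unsupervised baseline-deviation detection: flag an
# observation that sits far outside the user's own historical behavior.
# Login hour is an illustrative feature, not a product's actual model.
from statistics import mean, stdev

def is_anomalous(history, observation, z_threshold=3.0):
    """Return True if the observation deviates more than z_threshold
    standard deviations from the user's historical baseline."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return observation != mu
    return abs(observation - mu) / sigma > z_threshold
```

For a user who habitually logs in around 9–10 a.m., a 3 a.m. login is flagged as anomalous while another 10 a.m. login is not – no prior knowledge of the attack pattern required.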
Up-level your SOC by extending existing detection, investigation, and remediation capabilities with UEBA, all under the same platform. Provide security analysts with clearer answers as opposed to more open-ended questions!
# # #
RSA NetWitness UEBA starts getting smarter the moment you turn it on, spotting anomalous behaviors quickly, accurately, and without constantly demanding your attention for fine-tuning. Validate that YOUR SIEM CAN DO THIS!