In the previous blog post we looked at Network Packets (PCAPs) and how they can be utilised within a SOC environment. In this post we will build on this and take a look at Logs (which, most security sales staff will now tell you, are going to solve all your security problems today).
Computer Logs have been available since the very first days of ARPANET (granted, those ones were created manually by the engineers!). All operating systems and applications since the earliest days of computing have kept diagnostic logs, and over the years these have developed into what we now know as Event Logs.
However, for years these logs were viewed purely as a troubleshooting aid with no real security benefit; nobody was reading them. Gradually the benefits of reviewing log sources were identified as a result of many different compromises and investigations. "The Cuckoo's Egg" by Cliff Stoll is a classic example of using logs (and pagers and reams of tele-printer pages!) to track down and identify an attacker (in Stoll's case this eventually led to a hacker in Hannover, Germany, who was selling the data he had stolen to the KGB). Over the years, forensic analysis of logs has led to a better understanding of their usefulness and how they can be used to search for attacker incursions. This led to another problem: how to view all of these logs in a centralised location?
Various log management tools were utilised over the years, but most were reactive and required the analyst to trawl through logs to find specific threats. Gradually these systems evolved into Security Information Management (SIM) systems, which dealt with long-term warehousing and analysis of logs, and Security Event Management (SEM) systems, responsible for real-time monitoring, correlation and notification of Events and Incidents. In practice, vendors were creating systems that had many features of both, and eventually the term Security Information and Event Management (SIEM) was used for the first time by Mark Nicolett and Amrit Williams in a 2005 Gartner report. Today these systems form the heart of many Security Operations Centres.
Before setting about getting the required budget to implement a SIEM solution, we must first consider what it is we want to detect, what logs we have available and, more importantly, what logs we can successfully consume and retain. It is not just about having the initial detection capability; we have to be able to roll back time and analyse historical events (which ultimately is dictated by the size of our storage capacity). We will come back to detecting threats in a moment; first, let's look at the log sources available to most SOCs.
| Type of Log | Typical Examples |
| --- | --- |
| Operating Systems | Windows Workstation and Server Event Logs |
| Network Devices | Firewall, Proxy, DNS, DHCP and NAT logs |
| Applications | Email Server logs |
| Security and Monitoring tools | AV, IDS/IPS alerts |
Care must be taken when selecting log sources, as some of the above will generate an enormous amount of traffic (most notably Windows Workstation logs). This can place a large overhead on network bandwidth and storage capacity and, in the most extreme scenarios, degrade the performance of the monitored system itself.
Now we must analyse just what it is that we want to detect, and as such targeted Use Cases must be created. These should dictate the Objective, the Threat faced, the Data Sources and the Logic needed to detect said Threat (Blog 4 in this series deals with Use Cases in greater depth). Obviously the Data Sources are critical here; we must define them in advance and then on-board them into our SIEM solution so the Use Case can function and alert correctly. However, it does not end there. Once we have an alert, there must be sufficient additional logs to enable an analyst to successfully investigate the Incident. Below is an example of a typical scenario:
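As a sketch, the elements of a Use Case (Objective, Threat, Data Sources, Logic) can be captured in a simple structured record. The field names and the example detection rule below are illustrative only, not taken from any specific SIEM product:

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    """Minimal detection Use Case record (illustrative field names)."""
    objective: str       # what the detection is meant to achieve
    threat: str          # the threat being detected
    data_sources: list   # logs that must be on-boarded before the rule can fire
    logic: str           # detection logic, e.g. a correlation rule

# A hypothetical example: detecting password guessing from Windows logs
brute_force = UseCase(
    objective="Detect password guessing against Windows accounts",
    threat="Brute-force attack on user credentials",
    data_sources=["Windows Security Event Logs (Logon Events - Failure)"],
    logic="More than 10 failed logons for one account within 5 minutes",
)
```

Keeping Use Cases in a structured form like this makes it easy to audit which data sources each detection depends on before committing to on-boarding them.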
1. An analyst responds to a Malware detection from AV Logs and reviews all of the host's AV Logs to identify the point of compromise.
2. Proxy logs are reviewed for the time of compromise and network traffic to a suspicious domain is observed; a review of DNS logs supports this, but additional domains are also observed.
3. The Threat Intelligence Analyst identifies multiple additional threat domains from Domain Registrant research, and blocks are placed on Firewall and IDS/IPS devices.
4. Both the Firewall and IDS/IPS alert for additional internal hosts communicating with the associated malicious domains; NAT Logs are required to identify the internal hosts (as the Firewall only has the details of the Proxy Server).
5. All affected systems are analysed for evidence of malware, and further investigation identifies a Phishing attack as the source of the malware.
6. Email Logs identify the list of targeted individuals; this list contains all of the additional hosts seen communicating with the malicious domains, although a number of additional users have not clicked on the malicious link and are not affected.
7. The Threat Intelligence Analyst creates a Watchlist of ALL targeted users, which is implemented on the SIEM to escalate any future incidents associated with those users.
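The pivot from the AV detection into the proxy logs is, at its core, a time-window filter around the point of compromise. A minimal sketch follows; the log records, timestamps, IP addresses and domains are invented purely for illustration:

```python
from datetime import datetime, timedelta

# Hypothetical parsed proxy log entries: (timestamp, source IP, domain)
proxy_log = [
    (datetime(2017, 3, 1, 10, 14), "10.0.0.23", "update.vendor.example"),
    (datetime(2017, 3, 1, 10, 17), "10.0.0.23", "evil-domain.example"),
    (datetime(2017, 3, 1, 11, 30), "10.0.0.45", "news.example.org"),
]

def pivot(alert_time, host_ip, log, window_minutes=15):
    """Return proxy entries from host_ip within +/- window of the AV alert."""
    window = timedelta(minutes=window_minutes)
    return [entry for entry in log
            if entry[1] == host_ip and abs(entry[0] - alert_time) <= window]

av_alert = datetime(2017, 3, 1, 10, 15)   # time of the AV detection
hits = pivot(av_alert, "10.0.0.23", proxy_log)
```

In a real SIEM this filter is expressed as a search query rather than code, but the principle is identical: constrain by host and by a time window anchored on the original alert.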
As can be seen, even for a very simple detection, multiple additional sources are required to fulfil a successful investigation. Crucial in ensuring that all of the logs line up to the same time is the Network Time Protocol (NTP); without it, the logs received by the SIEM may carry incorrect timestamps. Accurate time is most important of all on the DHCP Server, as without it we will be unable to correctly identify which piece of hardware held a particular IP address at any given time.
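The DHCP attribution step amounts to looking up which lease covered a given IP address at a given instant. A minimal sketch, with invented lease records and hostnames:

```python
from datetime import datetime

# Hypothetical DHCP lease records: (lease start, lease end, IP, hostname)
leases = [
    (datetime(2017, 3, 1, 8, 0), datetime(2017, 3, 1, 12, 0),
     "10.0.0.23", "WKSTN-ALICE"),
    (datetime(2017, 3, 1, 12, 0), datetime(2017, 3, 1, 18, 0),
     "10.0.0.23", "WKSTN-BOB"),
]

def host_for_ip(ip, at_time, lease_table):
    """Return the hostname holding `ip` at `at_time`, or None if unknown."""
    for start, end, lease_ip, hostname in lease_table:
        if lease_ip == ip and start <= at_time < end:
            return hostname
    return None

owner = host_for_ip("10.0.0.23", datetime(2017, 3, 1, 10, 15), leases)
```

Note that the same IP maps to two different machines on the same day, which is exactly why an unsynchronised DHCP Server clock can attribute an incident to the wrong host.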
Most SIEM solutions incorporate multiple threat feeds (dependent upon cost) which enables faster detection of threats. However there are issues with this type of unfiltered information which we will look at in the next Blog post: The Realm of Threat Intelligence - Using Intelligence in an Advanced SOC
Unfortunately, many organisations see logs purely as an aid to meeting compliance regulations. Typically these compliance mandates are concerned with personally identifiable information such as Social Security Numbers, Addresses, Logins, Health Records, Banking Details and Credit Card Numbers. For security staff this information is essential when following up an identified breach. As an example, the types of Windows Logs required for PCI DSS (Payment Card Industry Data Security Standard) are:
- Account Logon - Success and Failure
- Account Management - Success and Failure
- Directory Service Access - Failure
- Logon Events - Success and Failure
- Object Access - Success and Failure
- Policy Change - Success and Failure
- Privilege Use - Failure
- System Events - Success and Failure
How long should you retain your logs? If you are concerned with day-to-day Investigations and Incidents, a minimum of 3 months of easily accessible logs would be sufficient for most analytical needs. However, as mentioned above, most organisations are subject to compliance requirements, which can mandate retention ranging from 1 to 6 years (in the case of PCI DSS this is 1 year). This obviously has a massive impact on the actual storage capability of any SIEM and, as a result, on the cost of the hardware required.
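A back-of-envelope sizing calculation makes the storage impact concrete. The events-per-second rate and average event size below are assumed figures for illustration; real values vary enormously between environments, and compression will reduce the raw total:

```python
def storage_gb(events_per_second, avg_event_bytes, retention_days):
    """Rough raw storage needed for a retention period (no compression)."""
    seconds = retention_days * 24 * 60 * 60
    return events_per_second * avg_event_bytes * seconds / 1e9

# e.g. an assumed 2,000 EPS at ~500 bytes/event for PCI DSS's 1-year retention
one_year = storage_gb(2000, 500, 365)   # roughly 31,500 GB of raw logs
```

Even at this modest event rate, a 1-year mandate implies tens of terabytes of raw log storage, which is why retention requirements dominate SIEM hardware costs.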
Unless your organisation requires its own standalone logging system (Banking, Defence, Government and other classified systems), the future of the market is the Cloud. The likes of Amazon's EC2 and Salesforce's virtual infrastructure have been joined by Dell EMC and HP, all of which are actively selling cloud storage solutions as an alternative to their physical hardware. Running your SIEM solution from the cloud offers many advantages; however, there are a number of specific challenges which need to be addressed.
- Management: for any SIEM there must be a centralised Management Console capable of updating and conducting configuration management of the rest of the logging platform.
- Operational Security: not only must the system be available at all times (utilising High Availability clusters etc.), it must also be secure from external attack.
- Bandwidth: by far the biggest consideration of any Virtualised Logging Solution. All of the logs have to be transmitted over the Organisation's Internet access, creating possible bottlenecks.
- Search performance: any analyst who has utilised a SIEM will be aware of how long some searches can take to complete. Virtualised networks can compound this problem if the logging servers have not been built with a hierarchy in mind which allows multiple virtual servers to share the processing load.
- Pre-Filtering of logs: as with any log solution, establishing just what you want to monitor is key. External storage and bandwidth considerations must also be assessed for Virtual infrastructures.
- Availability: it goes without saying that an externally hosted logging solution has to be available for analysts to conduct their Investigations.
- Buffering: any solution should utilise a large buffer storage capability in case of system downtime, to ensure that logs are not lost.
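The buffering requirement can be illustrated with a small sketch. A real log shipper would spill to disk when the remote SIEM is unreachable; this simplified, hypothetical example only bounds memory, deliberately dropping the oldest events when the buffer overflows:

```python
import collections

class LogBuffer:
    """Bounded in-memory buffer: holds logs while the remote SIEM is down.
    (A production shipper would also spill to disk; this is a sketch.)"""

    def __init__(self, max_events):
        # deque with maxlen silently discards the oldest entries when full
        self.queue = collections.deque(maxlen=max_events)

    def write(self, event):
        self.queue.append(event)

    def flush(self, send):
        """Drain the buffer through `send` once connectivity returns."""
        sent = 0
        while self.queue:
            send(self.queue.popleft())
            sent += 1
        return sent

buf = LogBuffer(max_events=3)
for i in range(5):              # 5 events into a 3-slot buffer: 2 oldest lost
    buf.write(f"event-{i}")

delivered = []
count = buf.flush(delivered.append)
```

The point of the exercise: an undersized buffer loses exactly the oldest events, which in an investigation are often the ones that matter, hence the advice to size the buffer generously.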
The days of an analyst having to walk into the server room to inspect the log files are most definitely history.