How do you know whether your cloud security controls are adequate? How would you know if a security breach were in progress right now? You may have diligently planned and implemented strict security policies and mechanisms to keep intruders away from the servers and data in your cloud environment, but how can you be sure these measures are sufficient, and that no breach has already occurred?
The challenge with cloud security (or any security, for that matter) is that you typically become aware of a gap in your security framework only after a breach has occurred. Timely detection of security incidents is therefore critical to minimizing the harm they cause, and the information stored in system and device logfiles is one of your best tools for early detection. Logfile management is the process of capturing and analyzing security logs to detect breaches or break-in attempts. More specifically, it involves generating the logfile data, transmitting the data to a centralized repository, analyzing and interpreting the data, and securely storing the data. In this blog post, we explore these four key activities of a successful logfile management process for a cloud environment.
Regardless of the tools you use or the process that drives your logfile management, the following four activities must be performed to achieve an effective cloud security management framework.
1) Configuring the Generation of Logfiles
A variety of IT and cloud components can generate logfiles with security-related data. Possible sources include operating systems, routers, firewalls, antivirus software (server and desktop), Intrusion Detection Systems (IDS), Intrusion Prevention Systems (IPS), databases, middleware, and business application software. Each of these sources has configuration settings that determine what to log, in how much detail, and how often. For each source, an administrator must determine the correct settings to capture the right data; your security policies should guide administrators to the proper level of data capture.
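As a concrete illustration, here is a minimal sketch of how an application-level source might be configured using Python's standard logging module. The logger name, file path, rotation sizes, and log level are illustrative assumptions; the right level of detail should come from your security policies.

```python
import logging
import logging.handlers

# Configure an application logger so that security-relevant events
# (authentication failures, privilege changes, etc.) are captured.
# The format includes a timestamp, severity, and source, which the
# later analysis and correlation steps depend on.
logger = logging.getLogger("app.security")   # illustrative logger name
logger.setLevel(logging.INFO)                # policy-driven choice

handler = logging.handlers.RotatingFileHandler(
    "security.log",                          # illustrative path
    maxBytes=10 * 1024 * 1024,               # rotate at 10 MB
    backupCount=5,                           # keep 5 archived files
)
handler.setFormatter(logging.Formatter(
    "%(asctime)s %(levelname)s %(name)s %(message)s"
))
logger.addHandler(handler)

# Example of an event an application might record:
logger.warning("Failed login attempt for user=%s from ip=%s",
               "alice", "203.0.113.7")
```

The same idea applies to infrastructure sources such as routers and firewalls, where the equivalent settings live in the device configuration rather than in application code.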
2) Transmitting Logfiles to a Central Repository
For proper analysis, all log data needs to be sent to a central repository. Some security breaches can only be detected when data from multiple sources is reviewed and correlated. What appears to be an innocent failed login attempt on one server may prove to be the same attempt made across hundreds of servers, and this will only be detected if all of the distributed log data is viewed as a whole.
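As an example of the forwarding step, the sketch below uses Python's logging.handlers.SysLogHandler to ship records to a central syslog collector. The collector hostname and port are placeholders for your environment, and a production deployment would typically use a TLS-protected transport rather than plain UDP.

```python
import logging
import logging.handlers

# Forward log records from this host to a central collector so that
# events from many servers can be correlated in one place.
logger = logging.getLogger("app.security")
logger.setLevel(logging.INFO)

# "logs.example.com" is a placeholder for your central syslog server;
# 514/UDP is the traditional syslog port.
handler = logging.handlers.SysLogHandler(
    address=("logs.example.com", 514)
)
handler.setFormatter(logging.Formatter(
    "%(asctime)s %(name)s %(levelname)s %(message)s"
))
logger.addHandler(handler)

logger.info("Failed login attempt for user=%s from ip=%s",
            "alice", "203.0.113.7")
```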
3) Analyzing and Correlating the Log Data
Once the log data has been centralized, it can be analyzed and correlated. This generally requires software tools: given the large volume of raw data, it is impractical to perform this step manually. Specialized tools such as syslog analyzers and Security Information and Event Management (SIEM) software are needed to process all of the data in a timely manner.
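To make the correlation idea concrete, here is a simplified sketch of the kind of aggregation a SIEM performs automatically: grouping failed-login events from many hosts by source IP so that a distributed attempt stands out. The fixed log format and the threshold value are assumptions for illustration only.

```python
import re
from collections import defaultdict

# Toy centralized log format: "<ts> <host> FAILED_LOGIN user=<u> ip=<ip>".
# Real SIEM tools parse many formats; this fixed format is an assumption.
LINE_RE = re.compile(
    r"^(?P<ts>\S+)\s+(?P<host>\S+)\s+FAILED_LOGIN"
    r"\s+user=(?P<user>\S+)\s+ip=(?P<ip>\S+)$"
)

def correlate(lines, threshold=3):
    """Flag source IPs whose failed logins span several hosts.

    A few failures on one server look innocent; the same IP failing
    across many servers suggests a coordinated attempt.
    """
    hosts_by_ip = defaultdict(set)
    for line in lines:
        m = LINE_RE.match(line)
        if m:
            hosts_by_ip[m["ip"]].add(m["host"])
    return {ip: hosts for ip, hosts in hosts_by_ip.items()
            if len(hosts) >= threshold}

sample = [
    "2024-05-01T10:00:01 web01 FAILED_LOGIN user=root ip=203.0.113.7",
    "2024-05-01T10:00:03 db02 FAILED_LOGIN user=root ip=203.0.113.7",
    "2024-05-01T10:00:05 app03 FAILED_LOGIN user=root ip=203.0.113.7",
    "2024-05-01T10:01:00 web01 FAILED_LOGIN user=bob ip=198.51.100.9",
]
print(correlate(sample))  # flags 203.0.113.7, seen failing on three hosts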
4) Securely Storing the Log Data
Since the log data may contain information about a security breach, it must be protected and secured. It may become legal evidence, supporting evidence in an audit, or the basis for forensic analysis. For these reasons and others, the collected log data and its analysis must be stored securely, with proper confidentiality and integrity controls.
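As one simple integrity safeguard, an archived logfile can be sealed with a keyed hash (HMAC) when it is stored and verified before it is used as evidence. This sketch deliberately simplifies key handling; a real deployment would pull the key from a secrets manager and add encryption and access controls for confidentiality.

```python
import hashlib
import hmac

# Illustrative key only; in practice the key would come from a secrets
# manager or HSM and would never be hard-coded.
SEAL_KEY = b"replace-with-managed-secret"

def seal(path):
    """Compute an HMAC-SHA256 seal over an archived logfile."""
    with open(path, "rb") as f:
        return hmac.new(SEAL_KEY, f.read(), hashlib.sha256).hexdigest()

def verify(path, expected_seal):
    """Return True if the file has not been altered since sealing."""
    return hmac.compare_digest(seal(path), expected_seal)

# Usage sketch: seal the archive when it is stored, keep the seal in a
# separate protected location, and re-check it before relying on the
# log as evidence.
# stored_seal = seal("security-2024-05.log")
# assert verify("security-2024-05.log", stored_seal)
```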