Managing and Monitoring Server Log Data
Bill Kleyman, Data Center News
The data center environment is growing, and proper server log management has become more important than ever. Keeping an eye on event logs from servers, firewalls, appliances and even switching infrastructure helps IT administrators do much more than react to issues – when log management is accurate, engineers create a proactive environment capable of spotting and containing problems before they surface. Let’s review some tips to help you make the most of server log data.
With effective log management, administrators are able to accomplish the following tasks:
•Create an audit trail for forensic analysis. There are times when an intrusion is suspected or a data loss event has occurred. A good audit trail allows forensic data center engineers to retrace the steps taken by anyone who has recently entered the environment, and then correlate that data into usable information.
•Monitor for and manage intrusions. Active server log monitoring can help prevent both accidental and malicious intrusion into the system. When logs are set up properly, all vital systems are monitored and an immediate red flag is raised if unauthorized activity occurs.
•Contain incidents. If an unauthorized event occurs within a data center, properly configured logs can alert engineers to it quickly. With good log management, engineers can immediately see where the problem occurred and isolate that network or server segment before further damage is done.
•Proactively protect the environment. Baseline analysis and log management tools can help an organization be proactive in its security methodology. By catching holes in security or problems with existing systems, engineers can resolve issues before they become serious. If logs are monitored and configured properly, this can mean the difference between simply patching a server port and dealing with data loss.
•Configure real-time alerts. Data centers act as the core of business IT operations. Log management is important, but equally important is the ability to access and monitor real-time alerts. With a good alerting design, administrators know what is happening in their environment and can resolve issues without wasting time. When an intrusion or a critical event occurs, every second is valuable.
•Manage active network logs and create a usage baseline. Logs can also help an organization plan for the future. For example, network logs can be used to establish a baseline for the existing environment. From there, engineers can see where capacity is lacking and plan efficiently for growth.
•Create living log workbooks that change as IT demands evolve. Keeping an active log book that tracks all logs across an environment can benefit every aspect of a data center. Understanding what various systems are doing, where they are under- or over-performing, and how they are being utilized allows engineers to shape their infrastructure as business demands change. A live log workbook lets future data center engineers see and learn how their environment behaves.
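Several of the tasks above, building an audit trail in particular, begin with turning raw log lines into structured records. Below is a minimal sketch in Python; the line format, field names and `audit_trail` helper are hypothetical assumptions for illustration, since real devices emit many different formats.

```python
import re
from datetime import datetime

# Hypothetical syslog-style format: "2024-01-02T10:00:05 host facility: message".
# Real devices vary widely; adjust the pattern per log source.
LINE_RE = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}) "
    r"(?P<host>\S+) (?P<facility>\S+): (?P<message>.*)$"
)

def parse_line(line):
    """Parse one log line into a dict, or return None if it is malformed."""
    m = LINE_RE.match(line.strip())
    if m is None:
        return None
    record = m.groupdict()
    record["ts"] = datetime.fromisoformat(record["ts"])
    return record

def audit_trail(lines, user):
    """Return the time-ordered records whose message mentions a given user."""
    records = (parse_line(l) for l in lines)
    return sorted((r for r in records if r and user in r["message"]),
                  key=lambda r: r["ts"])
```

Once lines are structured this way, the same records can feed alerting, baselining and the living workbook the list describes.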
It’s important to understand that every environment is unique and will have different server log management requirements. Government regulations may require some data centers to retain their logs for a certain period of time. Others must produce an audit trail for compliance standards such as SOX or HIPAA – an increasingly vital process for enterprises across many industries. An ineffective log management process carries serious liabilities, both tangible and intangible, including loss of data, security breaches, and increased risk of data and environment compromise.
Enterprises that analyze their log data efficiently can easily recognize its positive value and impact on IT and overall operations. Properly correlated log data can show data center engineers how well their environment is performing in conjunction with other systems in the infrastructure. For example, log data can show how a network switch could be better optimized for access to a storage area network. Information continuously gathered by log analysis and reporting tools can also help enterprises assess their existing security posture and cut down on the costs of extensive regulatory audits and recovery measures. A healthy, up-to-date log environment gives engineers needed insight into the health and accessibility of networks, systems and applications.
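As a toy illustration of the correlation described above, the sketch below pairs events from two log sources (say, a switch and a SAN) that occur close together in time. This is a deliberate simplification; production correlation engines match on many more fields than timestamps.

```python
from datetime import timedelta

def correlate(events_a, events_b, window_s=5):
    """Pair events from two log sources whose timestamps fall within
    window_s seconds of each other. Each event is a (datetime, message)
    tuple; the time-window-only matching is an illustrative assumption."""
    window = timedelta(seconds=window_s)
    return [(msg_a, msg_b)
            for ts_a, msg_a in events_a
            for ts_b, msg_b in events_b
            if abs(ts_a - ts_b) <= window]
```

A switch port flap paired with a SAN path loss seconds later, for instance, points at the optimization opportunity mentioned above far faster than reading either log alone.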
Although best practices should be developed by each individual organization based on its particular environment, there are some general best practices which can be universally applied.
Use third-party tools wisely. Too often an engineering team purchases a server log management tool that it never uses properly. Prior to making a purchase, an inventory should be taken of the existing infrastructure; engineers should then confirm that the proposed log management tools can handle log collection for those specific devices. Make sure you understand your organization’s logging demands – and what you want to get out of them – before spending potentially thousands of dollars on a log management tool. Security is another example: if security is a primary control objective, look for vendors that provide proactive alerting with their software. Ineffective tools can create a log environment that makes it difficult (if not impossible) to find the data you need.
Check logs routinely. Many organizations look at logs as a reactive means of finding information or troubleshooting, instead of using that data to spot trends. But proactively checking and analyzing log data takes a concerted and disciplined effort.
Large environments must make log management a daily task to keep up with the many logs available. Checking logs daily lets engineers stay on top of the environment and spot issues before they escalate, and regular monitoring teaches them how each system functions in conjunction with others. Even if compliance is not a major concern, checking logs on a set routine can save significant time and money should an incident occur. For example, active security alerts should be set up covering all access into an environment. If an organization sets up its rules properly, it will know an intrusion has occurred before any damage is done. By proactively catching security issues, companies can save thousands, even millions, of dollars in prevented data loss.
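The access-alerting idea above can be sketched as a simple threshold check over authentication logs. The line format (`FAILED LOGIN ... src=...`) is a hypothetical stand-in; substitute whatever your systems actually emit.

```python
from collections import Counter

def failed_login_sources(log_lines, threshold=5):
    """Return source IPs with at least `threshold` failed logins.
    Assumes hypothetical lines like 'FAILED LOGIN user=bob src=10.0.0.7'."""
    counts = Counter()
    for line in log_lines:
        if "FAILED LOGIN" not in line:
            continue
        for field in line.split():
            if field.startswith("src="):
                counts[field[4:]] += 1
    return [ip for ip, n in counts.items() if n >= threshold]
```

Run on a schedule (or streamed), a check like this turns a daily log review into an automatic intrusion flag.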
Create monitoring and alerting systems. Many server log management tools have this feature built in. However, many environments set up their alerting systems to focus only on compliance-based issues, ignoring other potentially important logs. When managing logs, it’s important to set up alerts for security and system monitoring that fall outside the area of compliance as well. This way, administrators can see the bigger picture of the overall health of the environment. For example, core data center networking devices all have logs that can be collected, monitored and then used. By effectively utilizing these logs, administrators can quickly see port misconfigurations, security holes and how to most efficiently utilize their switches. Even more important, with good alerting, proactive actions can be taken within an environment to help keep the data center functioning and healthy.
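One way to picture alerts that go beyond compliance is a small rule table evaluated against every structured log event. The field names and match strings below are illustrative assumptions, not drawn from any particular product.

```python
# Illustrative alert rules over structured log events (dicts).
# The event fields ("source", "message") and patterns are assumptions.
RULES = [
    ("port-errdisabled",
     lambda e: e.get("source") == "switch"
               and "err-disabled" in e.get("message", "")),
    ("auth-failure",
     lambda e: e.get("source") == "server"
               and "authentication failure" in e.get("message", "")),
    ("config-change",
     lambda e: "configuration changed" in e.get("message", "")),
]

def triggered_alerts(event, rules=RULES):
    """Return the names of every rule the event trips."""
    return [name for name, predicate in rules if predicate(event)]
```

Adding a non-compliance rule (a switch port going err-disabled, for example) is just one more row in the table, which is what keeps the "bigger picture" visible.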
Use an internal log management policy. With log management, regular server log analysis becomes a key day-to-day procedure. Administrators need to develop a policy that requires regular log reporting. Once these reports are gathered, they must be analyzed for consistency and for any gap in procedure. Many times an aggregate log report can show where a security feature is failing or where a system component is not working properly. For example, large enterprises will have distributed data centers with a variety of devices. A good log management policy will look at all end-point infrastructure components and relay that information back to a centralized log management tool. Engineers can then look at load balancers, security gateways, and other data center appliances to see any discrepancies or faults between locations. Also, with a good policy, reports can be gathered over a long period of time. By looking at logs over a set span, we can correlate very important security and system data. This helps with further intrusion prevention as well as system health management.
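A policy that requires regular reporting from every site also needs a way to notice when a report is missing. A minimal sketch, assuming each site submits one report per period to the central tool:

```python
def report_gaps(expected_periods, reports_by_site):
    """Given the set of periods a report is expected in, and a mapping of
    site -> set of periods actually received, return the missing periods
    per site. Sites with no gaps are omitted from the result."""
    return {site: sorted(expected_periods - received)
            for site, received in reports_by_site.items()
            if expected_periods - received}
```

A non-empty result is itself a policy-gap alert: either the site's collection is broken or its reporting schedule is not being followed.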
Test your log management. Penetration testing or internal compliance testing will help determine whether your logs are gathering the right information. Even more important, testing establishes whether you are collecting and alerting on the right events. It’s an opportunity to review and refine the process, and regular testing helps an organization hone its log management environment and make it even more effective. During testing, engineers not only look at the logs from a test perspective; they can also see whether any unauthorized system applications are running or whether a breach may have been missed.
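The testing step can be approximated end to end by injecting a uniquely tagged synthetic event and confirming it was both collected and alerted on. The collector and alerter below are toy in-memory stand-ins written for this sketch, not a real product’s API.

```python
import uuid

class MemoryCollector:
    """Toy in-memory stand-in for a log collector."""
    def __init__(self):
        self._events = []
    def ingest(self, event):
        self._events.append(event)
    def query(self):
        return list(self._events)

class KeywordAlerter:
    """Toy alerter that fires whenever an event contains a keyword."""
    def __init__(self, keyword="SECURITY"):
        self.keyword = keyword
        self._fired = []
    def scan(self, events):
        for event in events:
            if self.keyword in event and event not in self._fired:
                self._fired.append(event)
    def fired(self):
        return list(self._fired)

def inject_and_verify(collector, alerter):
    """Inject a tagged synthetic event; report whether the pipeline
    both stored it and raised an alert on it."""
    marker = "LOGTEST-" + uuid.uuid4().hex[:8]
    collector.ingest("SECURITY test event " + marker)
    alerter.scan(collector.query())
    stored = any(marker in e for e in collector.query())
    alerted = any(marker in a for a in alerter.fired())
    return stored and alerted
```

The unique marker matters: it proves the pipeline handled this specific test event rather than some unrelated log line that happened to match.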
Lock down your logs. Server log management should be conducted by an authorized team, and only a limited number of authorized people should be responsible for log management and logging activities. Giving access to numerous people can lead to accidental (or malicious) deletion or modification of the existing log environment, which undermines the integrity of compliance and regulatory logging requirements.
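Locking down logs is partly a file-permission exercise. The sketch below flags log files on a POSIX system that are group- or world-writable, which would let users outside the authorized team alter the audit trail; treat it as an audit aid under that assumption, not a complete access-control check.

```python
import os
import stat

def insecure_log_files(paths):
    """Return the log files that are group- or world-writable,
    i.e. modifiable by users beyond the owning account."""
    flagged = []
    for path in paths:
        mode = os.stat(path).st_mode
        if mode & (stat.S_IWGRP | stat.S_IWOTH):
            flagged.append(path)
    return flagged
```

A full lockdown review would also cover ownership, directory permissions, and access in the log management tool itself, but a periodic sweep like this catches the most common slip.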
Effective logging will help protect an environment
When working with log management, it’s vital to understand the role that logs play inside a data center. Firewall, server and application logs can all work together to create a much more secure environment. Remember, by proactively scanning logs and setting up logging alerts, administrators can quickly catch security faults inside their own environment and secure it against potential breaches. Even more important is the security of corporate data: log management plays a big role in data loss prevention, and a single security breach can cost a company dearly in both dollars and reputation.
By having an effective log management policy, administrators are able to create a healthy, well-monitored environment capable of proactive data center security.
Bill Kleyman, MBA, MISM, is an avid technologist with experience in network infrastructure management. His engineering work includes large virtualization deployments as well as business network design and implementation. Currently, he is the Virtualization Architect at MTM Technologies Inc. He previously worked as Director of Technology at World Wide Fittings Inc.