There has been much hype surrounding virtualized environments, and it is safe to say that this technology is now out of the incubator and is a real-world solution to problems of scale.
As virtualization evolves, however, so do the challenges of maintaining security, cost-effectiveness, investment value, manageability, and compliance. Virtualized networks must still be accounted for and managed in much the same way that physical wired networks are managed – in fact, they are very similar from a network engineer’s perspective. Both require continuous monitoring, with the ability to dig in and troubleshoot potential issues at a moment’s notice. The difference lies in the way you choose to monitor your virtual network.
Real-Time Analysis:
If you need to perform real-time analysis on a virtual network, the main thing to consider is buffer size. The nature of a virtualized environment means that a fixed pool of resources is shared by many different applications. In essence, you are taking what was once a single computer and optimizing it for use by multiple applications. This allows for much greater efficiency, since in most cases a typical application uses only a small fraction of the available hardware resources. Because the resources are now shared so tightly, it becomes very important to set a buffer limit for network monitoring that is just enough to get the job done without compromising the execution of other applications on the virtual machine. It is also important to use the minimum number of captures possible to accomplish your network monitoring objective, since each capture requires its own buffer and, consequently, a share of your finite RAM.
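To make the budgeting concrete, here is a minimal sketch of splitting a fixed monitoring RAM budget across concurrent captures. The helper name and the 10%-of-VM-RAM budget are illustrative assumptions, not settings from any particular monitoring product:

```python
def per_capture_buffer_mb(ram_budget_mb: float, num_captures: int) -> float:
    """Split a fixed monitoring RAM budget evenly across concurrent captures.

    The budget itself (e.g. 10% of the VM's RAM) is an assumed policy;
    pick a value that leaves other applications on the VM unaffected.
    """
    if num_captures < 1:
        raise ValueError("need at least one capture")
    return ram_budget_mb / num_captures

# Example: reserve 512 MB of an 8 GB VM for monitoring, two concurrent captures
buffer_each = per_capture_buffer_mb(512, 2)  # 256 MB per capture
```

The even split is the simplest policy; the point is that every additional capture shrinks the buffer available to the others, which is why fewer captures are better.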
Post-Capture Analysis:
If you are doing post-capture analysis, your resource focus shifts from RAM to disk space. The goal in post-capture analysis is to save all packets to static storage for analysis at a later time. This significantly reduces the need for buffering data in RAM for immediate analysis, but increases the need for disk space based on the overall throughput of the virtual network and the length of time for which post-capture analysis is required. Assuming a lower virtual network speed of 100 Mbps, you should generally allot the following disk space per period of capture: 750 MB/min, 45 GB/hr, 1.1 TB/day. Keep in mind, however, that post-capture forensic searches are CPU and RAM intensive, so it is best to perform your analysis when other demands on the virtual machine are low, such as at the end of the day or early in the morning. It’s also a good idea to pre-compute the maximum RAM your search will use. Ideally, you should already have some idea of what kind of data you’ll want to inspect and will have search parameters in place that pull up no more than 10% of your total captured data. Feel free to experiment and search, starting with a smaller set of analysis functions enabled, until you’re sure you’ve found the data of interest. Be judicious about the types of searches you run and have an idea of what you expect your search results to look like before performing the actual search. This will save you time and resources.
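The disk-space figures above follow directly from the link speed. A small sketch of the arithmetic (assuming a fully saturated link, which is the worst case; real captures will often need less):

```python
def capture_storage_bytes(link_mbps: float, seconds: float) -> float:
    """Worst-case capture size for a link running at full utilization.

    link_mbps is in megabits per second; the result is in bytes,
    using decimal units (1 MB = 10**6 bytes) as in the figures above.
    """
    bytes_per_second = link_mbps * 1_000_000 / 8
    return bytes_per_second * seconds

# A 100 Mbps virtual network, fully utilized:
per_minute = capture_storage_bytes(100, 60)      # 750 MB
per_hour   = capture_storage_bytes(100, 3600)    # 45 GB
per_day    = capture_storage_bytes(100, 86400)   # ~1.08 TB, i.e. the 1.1 TB/day figure
```

Scaling is linear, so a 1 Gbps virtual network would need roughly ten times these allotments.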
Real-time analysis is best suited to immediate issues, like a report of poor application response time from the virtual server. It provides immediate interaction with the data, allowing you to expand and contract your analysis on the fly to zoom in on the issue. Post-capture analysis is best for long-term monitoring. While the virtual network may be running smoothly at the moment, you never know when something may go wrong. Continuously saving network history alongside your ongoing monitoring, even if only a few hours’ worth, can make the difference between knowing exactly what went wrong and spending countless hours attempting to reproduce the issue or waiting for it to happen again.
With these considerations in mind, you’ll be well on your way to overcoming blind spots and gaining complete visibility into your networks, whether they’re virtual or not.