More applications, more devices, and server virtualization adoption are all key contributors to the growth of data on networks. Recently, we came across an Infonetics research report showing that demand for higher-speed ports (10G, 40G, 100G) rose 62% from 2012. That's no surprise to anyone in the networking industry.
The volume of data on networks is colossal, and growth continues seemingly unabated.
So what does that mean for a network engineer in terms of monitoring and analyzing data? How should your habits and practices change?
Below we provide four key tactics that network engineers should follow when handling increased network data.
Capture Data 24/7

With network backbones at 10G or faster, capturing data 24/7 is essential. Traditional network analysis in the form of portable troubleshooting is no longer an option: by the time you dig out the network analyzer, find the right port(s) to monitor on the 10G switch, and get everything fired up, the problem is ancient history. Most laptops don't have a 10G card, and even if they do, "standard" network interface cards (NICs) are not up to the task of lossless 10G packet capture. At 10G, you need dedicated hardware that captures data around the clock, so troubleshooting can begin the instant an issue occurs.
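To see why commodity NICs struggle here, consider the packet rates involved. A minimal back-of-the-envelope sketch (the frame sizes chosen are illustrative; the 20 bytes of per-frame overhead are standard Ethernet preamble, start-frame delimiter, and inter-frame gap):

```python
# Sketch: packet rates a capture device must sustain at 10G line rate.
# Each frame carries 20 bytes of wire overhead beyond its own length:
# 7-byte preamble + 1-byte SFD + 12-byte inter-frame gap.
LINE_RATE_BPS = 10e9
WIRE_OVERHEAD_BYTES = 20

def packets_per_second(frame_bytes: int, rate_bps: float = LINE_RATE_BPS) -> float:
    """Maximum frames per second at a given frame size, including wire overhead."""
    bits_per_frame = (frame_bytes + WIRE_OVERHEAD_BYTES) * 8
    return rate_bps / bits_per_frame

for size in (64, 512, 1518):
    print(f"{size:>5}-byte frames: {packets_per_second(size) / 1e6:.2f} Mpps")
```

At minimum-size 64-byte frames this works out to roughly 14.88 million packets per second, which is well beyond what a general-purpose NIC and OS network stack can capture without drops.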
Plan Your Storage

Network analysis at 10G requires not just new hardware and 24/7 monitoring; it also requires a different approach. Detailed, real-time analysis is simply not practical at 10G, and it isn't necessary, since the problem you're looking for encompasses only a small subset of the data. What is required is ongoing recording of all network data (packets) so you can "rewind" to the timeframe of interest and perform a more targeted analysis of the specific problem. To do this, you need to store all of this packet data so it's available when you begin your investigation. For example, if you're recording at a full 10Gbps and you have 32TB of disk space in your appliance, you can record about 7 hours of network data. Fortunately, even on a 10G network segment you're not going to find a steady 10Gbps on the line, so you should have enough storage space even if the problem occurs overnight. However, if you need storage for an entire weekend, you need to plan your disk space carefully against your expected aggregate traffic. One solution is to add an aggregation tap to your network infrastructure. This helps by sending packet data to multiple appliances, increasing the overall storage available for heavily utilized high-speed networks.
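The storage math above is easy to sketch out. The 32TB and 10Gbps figures come from the example in the text; the utilization values below are illustrative, so plug in your own averages:

```python
# Sketch: how long a capture appliance's disk lasts at a given
# average line utilization. 32 TB at a sustained 10 Gbps works out
# to roughly 7 hours; real networks average far below line rate.
def retention_hours(disk_tb: float, rate_gbps: float, utilization: float = 1.0) -> float:
    """Hours of packet capture a disk can hold at an average utilization (0-1)."""
    disk_bytes = disk_tb * 1e12
    bytes_per_second = (rate_gbps * 1e9 / 8) * utilization
    return disk_bytes / bytes_per_second / 3600

print(f"full line rate: {retention_hours(32, 10):.1f} h")       # ~7.1 h
print(f"30% utilization: {retention_hours(32, 10, 0.3):.1f} h")  # ~23.7 h
```

Running the same calculation against a 64-hour weekend window tells you how much disk, or how aggressive a filtering policy, you actually need.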
Proper Capture Points
If you are monitoring a physical network connection, your capture points are pretty obvious, especially when dealing with a network backbone. However, with the increased volume of east-west traffic due to virtualization, you may need to adjust your monitoring points, or add some, to maintain full visibility. The best way to deal with this in a distributed virtual environment is to add a vSwitch into the architecture and use it as the connection point for your network analysis appliance. For more details on this tactic, check out our blog “Where to capture packets in high-speed and data center networks.”
Prioritize the Data You Collect

Prioritizing the data you collect is key. Any data you can filter out increases the overall throughput you can monitor and extends your available storage. For example, if you have a lot of web traffic on your network (and who doesn't), and it's all encrypted, why not slice the payloads off? This will significantly reduce the overall volume of data. Or perhaps backups are the biggest source of overnight network traffic. Again, you don't need the payloads of backup traffic; you really just want to know that it's happening, and perhaps log some metrics like the latency of the transfers. By leveraging what you know about your own network, you can significantly reduce your network analysis needs and either save money or extend the capabilities of your existing assets.
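The payoff from payload slicing is easy to estimate. A minimal sketch, assuming you truncate each packet to a fixed snaplen (the 1200-byte average packet size and 128-byte snaplen below are hypothetical; substitute your own traffic profile):

```python
# Sketch: fraction of capture volume kept after slicing packets
# to a fixed snaplen (enough for headers, none of the payload).
def sliced_ratio(avg_packet_bytes: float, snaplen: int) -> float:
    """Fraction of the original capture volume retained after slicing."""
    return min(snaplen, avg_packet_bytes) / avg_packet_bytes

# e.g. mostly full-size encrypted web frames, keeping the first 128 bytes
avg_bytes = 1200.0
print(f"keep {sliced_ratio(avg_bytes, 128):.0%} of bytes")  # ~11%
```

In this hypothetical mix, slicing keeps headers and timing intact while cutting stored volume by almost 90%, which translates directly into longer retention in the storage calculation above.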