
Five Common Mistakes Enterprises Make When Switching to Faster Networks (10G+)

As we’ve discussed in previous posts, many of our customers are moving to 10G. As we help them on their journey, we’ve come across a few common errors that folks make in the process. Below are the most common mistakes that we’re seeing. If you are looking to make the move, or maybe you are there already, be sure you are not doing the following.

1. Using a 1G network interface card for network analysis
Many of our customers try to use a standard 1G network interface card to capture 10G traffic when they make the move to 10G or even 40G. These cards are simply not up to the task of lossless capture of 10G packet traffic. Even if your typical utilization is under 1G, you need a dedicated NIC that is specifically designed for packet capture to ensure lossless, 24/7 data capture.
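If you want a quick sanity check that your capture NIC is actually keeping up, one rough, Linux-specific approach is to watch the kernel's receive-drop counter for the interface while capture is running. A minimal sketch in Python (the interface name is just an example, and other platforms expose these counters differently):

```python
#!/usr/bin/env python3
"""Watch an interface's receive-drop counter on Linux by parsing
/proc/net/dev. A steadily climbing drop count during capture is a
strong sign the NIC (or its driver) can't keep up with line rate."""

import sys
import time

def rx_drops(iface: str) -> int:
    """Return the cumulative receive-drop count for `iface`."""
    with open("/proc/net/dev") as f:
        for line in f:
            if line.strip().startswith(iface + ":"):
                # Receive fields after the colon: bytes, packets, errs, drop, ...
                fields = line.split(":", 1)[1].split()
                return int(fields[3])
    raise ValueError(f"interface {iface!r} not found")

if __name__ == "__main__":
    iface = sys.argv[1] if len(sys.argv) > 1 else "eth0"  # example name
    before = rx_drops(iface)
    time.sleep(10)  # sample window while your capture is running
    after = rx_drops(iface)
    print(f"{iface}: {after - before} packets dropped in 10s")
```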

2. Averaging and data sampling
Today, many enterprises monitor and record statistics from their 10G network segments using flow-based data (NetFlow, sFlow, IPFIX, etc.). Though quite useful, flow-based monitoring is not always 100% accurate, since data sampling is often employed to reduce the computational load on the device supplying the data (the switch or router). Also, to extend reporting periods and minimize storage requirements, many solutions time-average the data as it ages, further limiting its accuracy and usefulness. This tactic sacrifices key details about how your network is functioning for architectural convenience. If you are responsible for providing a detailed analysis of your network, whether for security forensics or for fixing granular problems, you will be running around in circles. And if you are a service provider offering service-level agreements, your reports may be insufficient and may hide the networking trends that are causing long-term performance issues for your customers.

It is important to find a solution that can capture data 24/7 and can analyze and store that data with one-minute granularity at all times. This will help you thoroughly understand how your network is running over time and will also give you the details you need to solve intermittent problems.
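To make the contrast with sampling and re-averaging concrete, here is a minimal sketch of that kind of one-minute roll-up. The capture side is left out; the function simply assumes your pipeline can hand it (timestamp, frame length) pairs:

```python
"""Minimal sketch of one-minute aggregation: roll per-packet records
up into fixed 60-second buckets instead of sampling or re-averaging
them away. The input format here is an assumption, not any specific
product's API."""

from collections import defaultdict

def aggregate_per_minute(packets):
    """packets: iterable of (epoch_seconds, frame_bytes) tuples.
    Returns {minute_start_epoch: (packet_count, total_bytes)}."""
    buckets = defaultdict(lambda: [0, 0])
    for ts, length in packets:
        minute = int(ts) - int(ts) % 60  # floor to the minute boundary
        buckets[minute][0] += 1
        buckets[minute][1] += length
    return {m: tuple(v) for m, v in sorted(buckets.items())}

# Example with a few synthetic packets:
sample = [(1700000000.1, 1500), (1700000030.7, 64), (1700000061.2, 9000)]
for minute, (pkts, total) in aggregate_per_minute(sample).items():
    print(f"{minute}: {pkts} pkts, {total} bytes, {total * 8 / 60:.1f} bps avg")
```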

3. Improper capture points
If you are monitoring a physical network connection, your capture points are fairly obvious, especially on a network backbone. However, with the growing volume of east-west traffic driven by virtualization, which is becoming common even in 10G deployments, you may need to adjust your monitoring points, or add new ones, to maintain full visibility.

The best way to approach a distributed virtual environment is to add a virtual tap to the architecture and use it as the connection point for your network analysis appliance.
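What a virtual tap looks like in practice depends on your hypervisor and virtual switch. As one hedged example, if the environment runs Open vSwitch, its port-mirroring feature can act as a software SPAN port. The sketch below shells out to ovs-vsctl; the bridge and port names (br0, tap-out) are placeholders for your own:

```python
"""Sketch: turn an Open vSwitch port into a software tap (SPAN-style
mirror) so a capture appliance can see east-west VM traffic. Assumes
OVS is the virtual switch in play and ovs-vsctl is on the PATH."""

import subprocess

def mirror_all_traffic(bridge: str, output_port: str) -> None:
    """Mirror every packet crossing `bridge` to `output_port`, where
    the analysis appliance (or a capture VM) is attached."""
    subprocess.run(
        ["ovs-vsctl",
         # Look up the destination port record and name it @p.
         "--", "--id=@p", "get", "port", output_port,
         # Create a mirror that selects all traffic and sends it to @p.
         "--", "--id=@m", "create", "mirror",
         "name=analysis-mirror", "select-all=true", "output-port=@p",
         # Attach the mirror to the bridge.
         "--", "set", "bridge", bridge, "mirrors=@m"],
        check=True,
    )

if __name__ == "__main__":
    mirror_all_traffic("br0", "tap-out")  # example names
```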

4. Capturing your data without prioritizing
With 10G, you have a tremendous amount of data to collect, and it is essential to prioritize; without prioritizing, you'll stumble into all sorts of problems when you troubleshoot. Every bit of data you can filter out increases the overall throughput you can monitor and extends how long your available storage lasts. Know your network, and know what data you don't need, so you can significantly reduce both the volume of captured data and the complexity of your analysis.
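One common way to prioritize at the capture point is a kernel-level BPF filter, so excluded traffic never reaches your analyzer or your storage. A small sketch using the scapy library (assuming it is installed and you have capture privileges; the interface and filter are examples, here dropping bulk rsync backup traffic on TCP port 873):

```python
"""Filtering sketch with scapy: the BPF filter is applied in the
kernel, so packets it excludes are discarded before they cost you
capture throughput or disk space."""

from scapy.all import sniff

def handle(pkt):
    # Stand-in for your real analysis / write-to-storage path.
    print(pkt.summary())

# Standard BPF syntax: keep everything except TCP port 873 traffic.
sniff(iface="eth0", filter="not tcp port 873", prn=handle, count=100)
```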

5. Forgetting to baseline
It is always important to understand how your network should perform before moving to 10G. Before you transition to a larger backbone, establish clear baselines for your network so you can continuously test it and make sure it is performing up to your business standards.
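A baseline does not have to be elaborate to be useful. As a sketch, a mean and a rough 95th percentile over per-minute throughput samples (the numbers below are synthetic; in practice you would feed in weeks of real measurements) give you something concrete to re-test against after the cutover:

```python
"""Sketch of a simple baseline from per-minute throughput samples,
such as the one-minute buckets from mistake #2."""

import statistics

def baseline(samples_mbps):
    """Return (mean, rough 95th percentile) for later comparison."""
    ordered = sorted(samples_mbps)
    p95 = ordered[int(0.95 * (len(ordered) - 1))]  # crude percentile
    return statistics.mean(samples_mbps), p95

mbps = [420, 455, 430, 610, 980, 450, 445, 470, 520, 610]  # synthetic
mean, p95 = baseline(mbps)
print(f"baseline: mean {mean:.0f} Mb/s, 95th percentile {p95} Mb/s")
```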
