When it comes to network bandwidth and data speeds, for the most part we are still living in a 1G-to-10G transition world. That’s not to say that no one has invested in 10G or even 40G, but the pool of enterprises moving in this direction is still quite small.
It is not a technology issue – there are plenty of products on the market ready to handle 10G. The issue is cost. Businesses just don’t need 10x the bandwidth and are not willing to pay 3x the cost, so instead they are opting for multiple 1G channels. The Dell’Oro Group recently published a report on the underwhelming rate at which companies are moving to 10G.
There is, however, one common use case for enterprises moving to 10G. Below we illustrate that use case, along with some best practices that have emerged for monitoring and analyzing data at 10G.
Common Use Case for Moving to 10G
One of our customers, a large manufacturing company, made the transition because of significant growth in their backup traffic. They were saturating 2x and 4x Gigabit channels when their backups kicked in at night, so they started looking to virtualized architectures for help. After that transition, they needed 10G to support their virtualization server clusters, which had grown tremendously. Although this particular example is a manufacturing company, it is one of the most common reasons we hear across many industries for initially making the switch to 10G – too much backup traffic, and not enough time.
What to Think About Before You Switch to 10G
Migration to 10G usually happens first at core switches and in the network backbone – and the same is true if you’re already at 10G and planning a move to 40G. As we mentioned above, cost is a huge factor holding folks back, and it is not only the equipment cost. Cabling, rewiring, and increased power consumption can all be issues, depending on the magnitude of the migration, so don’t forget to factor in ALL the costs.
Network Monitoring and Analysis at 10G
As part of your migration plan, remember to upgrade your network analysis and troubleshooting solution(s) as well. Network analysis at 10G is a completely different beast, and a beast it is! The days of point-and-shoot, real-time analysis go by the wayside at 10G. Network recorders are a must. Installed at strategic locations, like WAN links and data centers, network recorders keep an ongoing record of all your network traffic. If alerts or alarms go off, you simply rewind your network traffic to see exactly what the problem was. If a particular user is having problems, or you need to retrieve packet-level data (network OR application) for a compliance investigation, again, the data is there and instantly searchable and retrievable.
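That “rewind” workflow depends on the recorder keeping a bounded, rolling window of capture files on disk. As a rough sketch of that ring-buffer behavior – the class name, file naming, and retention limit below are hypothetical illustrations, not any vendor’s implementation:

```python
import os
import tempfile

class RingBufferRecorder:
    """Simplified sketch of a network recorder's ring buffer:
    write fixed-size capture files in sequence and discard the
    oldest once a retention limit is reached."""

    def __init__(self, directory, max_files=3):
        self.directory = directory
        self.max_files = max_files
        self.seq = 0

    def write_capture(self, data: bytes) -> str:
        # Write the next capture file in the sequence.
        path = os.path.join(self.directory, f"capture_{self.seq:06d}.pcap")
        with open(path, "wb") as f:
            f.write(data)
        self.seq += 1
        self._expire_oldest()
        return path

    def _expire_oldest(self):
        # Keep only the newest max_files captures: this rolling
        # window is what you "rewind" through after an alert.
        files = sorted(
            p for p in os.listdir(self.directory) if p.endswith(".pcap")
        )
        for stale in files[:-self.max_files]:
            os.remove(os.path.join(self.directory, stale))

# Usage: record five chunks; only the last three remain on disk.
with tempfile.TemporaryDirectory() as d:
    rec = RingBufferRecorder(d, max_files=3)
    for i in range(5):
        rec.write_capture(b"packet data %d" % i)
    remaining = sorted(os.listdir(d))
    print(remaining)  # capture_000002 .. capture_000004
```

In a real recorder the window is sized in terabytes rather than a handful of files, but the trade-off is the same: retention depth is bought with disk.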
With 10G, 24×7 recording and monitoring are requirements. Simply connecting an analyzer after a problem is reported is not viable – there is too much data, and trying to reproduce problems is a nightmare. You need to capture the traffic – packet by packet – so you can immediately recreate the conditions to analyze and solve the problem.
With 10G, you also have to streamline and condense what you analyze. Attempting to monitor and analyze every type of data that streams through your network is tedious, and there is a key limiting factor – storage capacity for all that data. If you do not need to analyze VoIP data, for example, take it out of the mix. Doing so reduces the overall storage required and typically increases the rate at which network traffic can be processed and stored.
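To see why storage is the limiting factor, a quick back-of-envelope calculation helps. The 40% average utilization and the 15% VoIP share used below are illustrative assumptions, not measurements:

```python
def capture_storage_tb(link_gbps: float, utilization: float, hours: float) -> float:
    """Terabytes needed to record a link at a given average utilization."""
    bytes_per_sec = link_gbps * 1e9 / 8 * utilization  # bits -> bytes
    return bytes_per_sec * 3600 * hours / 1e12         # bytes -> TB

# A 10G link at an assumed 40% average utilization, recorded for 24 hours:
full = capture_storage_tb(10, 0.40, 24)
print(f"Full capture: {full:.1f} TB/day")          # 43.2 TB/day

# Dropping a traffic class that is, say, 15% of the mix (e.g. VoIP):
filtered = full * (1 - 0.15)
print(f"Filtered capture: {filtered:.1f} TB/day")  # 36.7 TB/day
```

Even a modest filter saves terabytes per day, which is why deciding up front what you do not need to record pays off at 10G.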
2013 might not be the year for you to transition to 10G, but it is the year to start planning and thinking about what that transition will look like, especially if you are looking to move to a more virtualized data center. If you are interested in learning more about best practices for moving to high-speed networks and the new role that overlay networks play in this transition, check out our webcast “Packet capture in high-speed and data center networks.”