It’s no secret that networks are becoming faster and much more complex. As network speeds surge to 10G, 40G, and even 100G, it’s impossible to properly troubleshoot network traffic with traditional, “on demand” techniques. Faster-speed networks demand full visibility and 24×7 uptime assurance, which often requires companies to adopt new approaches to network monitoring and analysis.
According to a recent survey by TRAC Research, 41% of respondents say that their current network analysis solutions are unable to support 10G networks. As a result, these companies are finding it harder and harder to accurately assess the data traversing their networks. Our own recent survey found that 43% of companies cite limited or no network visibility as the biggest challenge in transitioning to faster-speed networks.
So, how do companies meet these challenges head-on, and how do they address the perceived lack of data visibility at 10G+ network speeds? We’ve outlined several ways companies can make the transition while ensuring the data visibility and detailed troubleshooting that today’s high-speed networks demand.
New Approaches to Network Monitoring and Analysis at 10G
Many companies have attempted the transition to 10G network analysis by simply applying the same technologies that proved successful in 1G environments. What network engineers and IT directors often fail to realize is that 1G monitoring and analysis solutions do not scale to the performance of 10G networks, which leads to lost analysis data at critical moments and delayed inspection.
Network monitoring and analysis at 10G requires more powerful platforms with dedicated interfaces for high-speed data capture. Driven by the massive increase in transactions and data traversing the network, these platforms must capture and archive data in real time while also providing real-time feedback on overall network performance, so the data is always available for detailed, packet-based analysis.
Addressing the Data Visibility Challenge
Maintaining network performance in a 10G environment means monitoring more points in the network, at greater speeds, for more detailed information. Network analysis at 1G traditionally collects data through SPAN ports, mirror ports, or taps. Should a problem occur on your network, you simply connect your network analyzer, start a trace, and determine the cause. And if a problem occurred in the past, you either attempt to replicate the problem and solve the issue, or wait for it to happen again.
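To make the “connect an analyzer and start a trace” step concrete, here is a minimal sketch of the classic pcap trace format that most 1G-era analyzers read and write (24-byte global header, 16-byte per-packet record header). The helper names `write_pcap` and `count_packets` are illustrative, not taken from any particular product:

```python
import struct
from io import BytesIO

PCAP_MAGIC = 0xA1B2C3D4  # classic little-endian pcap magic number

def write_pcap(packets):
    """Serialize (timestamp_sec, raw_bytes) pairs into a pcap trace."""
    buf = BytesIO()
    # Global header: magic, version 2.4, tz offset 0, sigfigs 0,
    # snaplen 65535, linktype 1 (Ethernet)
    buf.write(struct.pack("<IHHiIII", PCAP_MAGIC, 2, 4, 0, 0, 65535, 1))
    for ts_sec, data in packets:
        # Per-packet record header: ts_sec, ts_usec, incl_len, orig_len
        buf.write(struct.pack("<IIII", ts_sec, 0, len(data), len(data)))
        buf.write(data)
    return buf.getvalue()

def count_packets(trace):
    """Walk a pcap byte string and count its packet records."""
    magic, = struct.unpack_from("<I", trace, 0)
    if magic != PCAP_MAGIC:
        raise ValueError("not a classic pcap trace")
    offset, count = 24, 0  # skip the 24-byte global header
    while offset < len(trace):
        _, _, incl_len, _ = struct.unpack_from("<IIII", trace, offset)
        offset += 16 + incl_len  # record header plus captured bytes
        count += 1
    return count

trace = write_pcap([(0, b"\x00" * 60), (1, b"\x00" * 60)])
print(count_packets(trace))  # 2
```

This file-at-a-time workflow is exactly what stops scaling at 10G: the trace has to exist before analysis can start.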
At 10G, however, it’s a different story. Organizations generate large volumes of data on the network at faster speeds, and recreating problems is highly impractical. Instead, organizations must identify key analysis points, install a solution that can monitor the network 24×7, and record all data, so that in the event of a network glitch, engineers can rewind the analysis to properly identify the issue.
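The record-then-rewind idea can be sketched as a rolling buffer that always holds the most recent traffic. `PacketRecorder` below is a hypothetical illustration, not a real product API; a production 24×7 recorder would stream packets to disk rather than hold them in RAM:

```python
from collections import deque

class PacketRecorder:
    """Rolling capture buffer: keeps the newest packets so analysis can
    'rewind' to just before a reported glitch."""

    def __init__(self, max_packets):
        # Oldest packets age out automatically once the buffer is full.
        self._ring = deque(maxlen=max_packets)

    def record(self, timestamp, packet):
        self._ring.append((timestamp, packet))

    def rewind(self, since):
        """Return the packets captured at or after `since`."""
        return [(ts, pkt) for ts, pkt in self._ring if ts >= since]

rec = PacketRecorder(max_packets=3)
for ts in range(5):
    rec.record(ts, b"pkt%d" % ts)
# Only the 3 newest packets survive; rewind to timestamps >= 3
print(rec.rewind(3))  # [(3, b'pkt3'), (4, b'pkt4')]
```

The design choice is the same one a network recorder makes: bound the retention window up front, and accept that anything older than the window is gone.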
Capturing and Storing Every Single Packet with No Packet Loss
This brings us to our last point: being able to capture all of your 10G network data isn’t sufficient; you also need to store that data so it’s available when you begin identifying issues on the network.
On 1G networks, traditional analysis incurred significant overhead when storing network packets, typically relying on the underlying file system to create and save files, which limited both the performance and the storage capacity of the appliances.
At 10G+ speeds, organizations must take full advantage of available disk space and should even consider connecting their network recorders to a SAN for additional storage. With that capacity, you can look back at data captured overnight, or even over an entire weekend.
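To see why that extra disk space matters, here is a back-of-the-envelope sizing sketch, assuming a fully utilized link and decimal units (1 TB = 10^12 bytes); the function name is our own, for illustration:

```python
def capture_storage_tb(link_gbps, utilization, hours):
    """Terabytes needed to retain `hours` of traffic captured at
    `utilization` (0.0-1.0) of a `link_gbps` link."""
    bytes_per_sec = link_gbps * 1e9 / 8 * utilization  # bits -> bytes
    return bytes_per_sec * 3600 * hours / 1e12

# A fully loaded 10G link: overnight (12 h) vs. a full weekend (48 h)
print(round(capture_storage_tb(10, 1.0, 12), 1))  # 54.0
print(round(capture_storage_tb(10, 1.0, 48), 1))  # 216.0
```

A saturated 10G link produces 1.25 GB of packets per second, roughly 4.5 TB per hour, which is why a weekend of look-back pushes well past what a standalone appliance holds and into SAN territory.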
Now that we’ve discussed our thoughts on how to better gain visibility into the data traversing your network at 10G+ speeds, we’d love to hear your thoughts on challenges with data visibility at 10G+. And be sure to check out our upcoming webinar for tips on dealing with the transition to faster-speed networks.