
Best Practices for Network Monitoring on High-Speed Networks

High-speed networking – networks operating at 10Gbps or greater – is growing across the globe, from San Francisco to Stuttgart. In fact, while at CeBIT last week, our discussions with visitors to the booth often turned to corporate initiatives focused on 40G network upgrades and, of course, the ability to monitor, analyze, and troubleshoot at these blinding speeds. And as networks get faster, the data they carry is becoming increasingly hidden inside overlay networks created by VM infrastructure, software-defined networks, and layer 2 multipathing.

Given these increasing challenges, here are a few best practices for monitoring, analyzing, and troubleshooting your network when poor performance or errors occur.

Monitoring on High-Speed Networks
With today’s network speeds and complexities, network monitoring is no longer a “one size fits all” proposition. You first need to decide what you want to monitor. Your devices and assets? Your network performance? Your application performance? Although there are other ways of dissecting the network monitoring market, this breakdown is easy to understand, is consistent with current industry analyst market segmentation, and significantly simplifies your search for commercially available solutions.

And it’s certainly not an either/or choice. Your company most likely needs to monitor all of these functions, although not all of them may be your individual responsibility, and some may be far more important to your business than others. What matters most is defining your requirements along these lines. Once your requirements are well defined, you can look for a solution, or multiple solutions, that meet your needs. Many products on the market attempt to address several, if not all, of these areas of network monitoring, but each has its strengths and typically excels in only one.

Analysis on High-Speed Networks
When we talk about analysis on high-speed networks, the focus shifts a bit. Although it may seem a bit geeky, the best way to define your analysis needs is to reference the OSI model and its seven layers of networking. To review, these are the physical, data link, network, transport, session, presentation, and application layers. Solutions typically marketed as “network monitoring and analysis” focus on the first four layers. These solutions are often based on flow technologies, for example NetFlow or sFlow, and have little to no visibility into the session, presentation, and application layers. Solutions marketed as “application monitoring and analysis” focus specifically on layers five through seven, leveraging application characteristics rather than network data as the basis of their reporting. The two solution types overlap somewhat, but this is definitely an area where one class of solution excels over the other, depending on your needs.
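
To make the layer 1–4 limitation concrete, here is a minimal sketch (ours, not any particular vendor’s tooling) that unpacks a NetFlow v5 flow record with Python’s standard struct module, following the widely published v5 record format. Notice that everything a flow record carries – addresses, ports, protocol, byte and packet counts – lives at layers 3 and 4; there is no payload, and therefore nothing for session-, presentation-, or application-layer analysis to work with.

```python
import ipaddress
import struct

# NetFlow v5 flow record layout (48 bytes), per the widely published
# v5 format; all fields are network byte order.
V5_RECORD = struct.Struct(
    "!4s4s4s"  # srcaddr, dstaddr, nexthop (IPv4)
    "HH"       # input/output SNMP interface indexes
    "II"       # dPkts, dOctets: packet and byte counts
    "II"       # first, last: sysUptime at start/end of flow (ms)
    "HH"       # srcport, dstport
    "BBBB"     # pad1, tcp_flags, prot, tos
    "HH"       # src_as, dst_as
    "BBH"      # src_mask, dst_mask, pad2
)

def parse_v5_record(buf: bytes) -> dict:
    """Decode one 48-byte NetFlow v5 record into its L3/L4 fields."""
    (src, dst, _nexthop, _in_if, _out_if, pkts, octets, _first, _last,
     sport, dport, _pad1, tcp_flags, proto, _tos, _src_as, _dst_as,
     _smask, _dmask, _pad2) = V5_RECORD.unpack(buf)
    return {
        "src": str(ipaddress.IPv4Address(src)),
        "dst": str(ipaddress.IPv4Address(dst)),
        "sport": sport, "dport": dport, "proto": proto,
        "packets": pkts, "bytes": octets, "tcp_flags": tcp_flags,
        # Note what is absent: no payload, so nothing for layers 5-7.
    }
```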

Solutions based on packet capture and analysis are designed to address all seven layers of the model, and do so quite well. A packet capture solution passively analyzes each and every packet on the network, compiling data that spans every layer. But again, no one solution excels in all areas: although a packet-based solution will certainly give you the most breadth, these products still tend to excel at network-layer analysis (layers one through four) rather than application-layer analysis.
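
As an illustration of how a packet-based tool sees every layer at once, here is a rough sketch using the open-source Scapy library (our choice for illustration; commercial analyzers use their own capture engines). It passively sniffs traffic and tallies what each packet exposes, from layer 2 up to application payload.

```python
from collections import Counter
from scapy.all import sniff, Ether, IP, TCP, UDP, Raw  # pip install scapy

layers_seen = Counter()

def classify(pkt):
    """Tally every layer this packet exposes, from L2 up to payload."""
    if Ether in pkt:
        layers_seen["L2 Ethernet"] += 1
    if IP in pkt:
        layers_seen["L3 IP"] += 1
    if TCP in pkt or UDP in pkt:
        layers_seen["L4 TCP/UDP"] += 1
    if Raw in pkt:
        layers_seen["L5-7 payload present"] += 1  # application bytes

# Passive capture; needs root/admin. Every packet is seen in full,
# which is what lets one tool report on all seven layers at once.
sniff(prn=classify, store=False, count=100)
print(layers_seen)
```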

We are holding a webinar on March 20th at 8:30am PDT to discuss this topic further. If you have any specific questions that you would like us to address during the webinar, please post a comment. We’ll also have some time after the webinar to discuss any questions that you may have.

Blind Spots on High-Speed Networks
Virtual servers are now commonplace. Virtual storage is taking the IT market by storm. And the virtual data center and virtual networks are visible on the horizon. Virtualization provides tremendous efficiencies, reducing the cost of equipment, management, and even utilities. But as with most technological shifts, there are consequences, especially for network analysis, that must be addressed. Virtualization, regardless of the “flavor,” creates a blind spot – a loss of visibility into traffic between virtual applications or virtual systems – when using traditional network analysis products and techniques.

Blind spots in virtual servers are the easiest to address. These exist when inter-application traffic resides entirely within the same virtual host. Since the traffic never hits a traditional hardware NIC, capturing the data through traditional means is ineffective, and the data is essentially hidden. A simple solution is to carve out a small segment of the virtual server and run packet-based network analysis software there. Since the software runs within the VM infrastructure, it has access to the virtual NICs, giving it visibility into the traffic exchanged between the virtualized applications on the server.
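
To illustrate, here is a bare-bones sketch of what analysis software running inside the virtual host can do: bind a raw socket to a virtual NIC and read frames that never touch physical hardware. This is Linux-only, requires root, and the interface name below is a hypothetical stand-in.

```python
import socket

# AF_PACKET delivers raw layer 2 frames on Linux; 0x0003 is ETH_P_ALL.
# Requires root. "eth0" is a stand-in: the real virtual NIC name
# depends on the hypervisor and guest OS (e.g. ens192, vnet0).
sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(0x0003))
sock.bind(("eth0", 0))

for _ in range(10):
    frame, _addr = sock.recvfrom(65535)
    # Traffic between two VMs on the same host appears here too,
    # because the capture point is the virtual NIC, not a physical one.
    print(f"captured {len(frame)} bytes, dst MAC {frame[:6].hex(':')}")
```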

Virtualized storage is less prone to this problem: in most cases the storage systems are physically separate from the virtual application servers, so traffic between the applications and the storage traverses a physical hardware NIC, providing a traditional capture point for network monitoring data.

Virtual data centers and virtual networks are just beginning to be deployed, mostly by cloud service providers rather than traditional enterprises, though the technology will certainly filter down to the enterprise at some point. These environments employ new standards, such as VXLAN and NVGRE, that encapsulate the traffic flowing between distributed VMs, a core technology of the virtual data center. To monitor traffic in these environments, a solution not only needs access to the virtual NICs, as it does when monitoring virtual servers; it also needs to strip the actual TCP/IP traffic out of these new encapsulation layers to be effective. Since this area of virtualization is very new, network monitoring products are just beginning to address these challenges.
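
As a rough sketch of that extra de-encapsulation step, the following peels the VXLAN header (8 bytes carried inside UDP, typically on port 4789) to recover the inner Ethernet frame and the VNI that identifies the overlay segment; the offsets follow the published VXLAN format (RFC 7348). Only after this peeling can the usual layer 2–7 analysis be applied to the inner traffic.

```python
import struct

VXLAN_PORT = 4789  # IANA-assigned UDP port for VXLAN

def strip_vxlan(udp_payload: bytes):
    """Peel the 8-byte VXLAN header; return (VNI, inner Ethernet frame)."""
    if len(udp_payload) < 8:
        raise ValueError("too short to be a VXLAN packet")
    flags, vni_field = struct.unpack("!B3xI", udp_payload[:8])
    if not flags & 0x08:         # the I bit: a valid VNI is present
        raise ValueError("VXLAN VNI flag not set")
    vni = vni_field >> 8         # upper 24 bits of the last word hold the VNI
    return vni, udp_payload[8:]  # the rest is the encapsulated L2 frame
```

From there, the recovered inner frame can be fed to exactly the same analysis used on non-overlay traffic.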

Networks are becoming faster and much more complex. The days of a “one size fits all” network monitoring and analysis solution are gone. Network analysis requirements must be carefully considered, and multiple solutions are typically needed to cover these complex environments, particularly in large enterprises. But by breaking down your network analysis requirements as described in this post, you’ll be able to quickly find the solutions that best address your specific needs.
