
What Types of Virtualization Are Most Vulnerable to Network Blind Spots?

Virtualization helps companies streamline application deployment, simplify IT operations, and respond faster to changing business demands. However, there is a downside when it comes to network analysis. Virtualization introduces network blind spots – areas of application traffic that never cross a physical network interface, which is the typical connection point for network analysis systems.

There are ways to make these network blind spots visible, but to get the right prescription you need to first determine the type of virtualization that needs to be addressed.

Standalone VM Systems
Standalone VM systems have multiple VM guests but a single VM host. The host is the physical hardware, while the guests are the virtual machines running inside the physical server, each running applications that share the overall hardware platform. Standalone VM systems are how virtualization got started, and this is probably still the most common type of virtualization in the market. In standalone VM systems you may have one or more virtual network interfaces (vNICs) per guest and one or more physical network interfaces (pNICs) per VM host, creating a complex flow of both virtualized and physical traffic.

A blind spot occurs in standalone VM systems when processes communicate between different guests inside the same physical system. To remedy this, you’ll want to run additional software, like OmniVirtual, as part of the virtual system to gain access to the traffic crossing the virtual NICs. This captures inter-VM traffic and provides the visibility necessary to properly analyze both network and application performance.
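The same principle can be sketched with stock open-source tools. A minimal illustration, assuming a Linux/KVM host whose guests sit on the default libvirt bridge – the interface name virbr0 and the guest addresses below are assumptions, not part of any product; check `ip link` on your own host:

```shell
# Capture inter-VM traffic crossing the hypervisor's virtual bridge.
# Requires root. virbr0 is libvirt's default bridge name (an assumption).
tcpdump -i virbr0 -s 0 -w inter-vm.pcap

# Restrict the capture to traffic between two specific guests
# (192.168.122.x is libvirt's default guest subnet; adjust to yours):
tcpdump -i virbr0 -w pair.pcap 'host 192.168.122.10 and host 192.168.122.20'
```

Because the bridge sits below the guests, this sees traffic that never touches a pNIC – exactly the blind spot described above.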

Coordinated/Distributed Virtualization
In the case of coordinated or distributed virtualization, the system consists of multiple VM hosts connected via a virtual distribution layer, making both inter-VM guest and inter-VM host traffic invisible to traditional network analysis techniques.

To address this more complex situation, a virtual tap is necessary. This is software that acts like a traditional hardware network tap, running at the level of the hypervisor. It allows users to tap into the vNICs and virtual switches that connect the various hosts, and gain access to the packet streams of interest for detailed network analysis. Since the virtual tap runs in the VM layer itself, it is typically vendor-specific, so keep that in mind when researching virtual taps. Once a virtual tap is in place, network recorders like WildPackets TimeLine or Omnipliance Core can be connected to the virtual tap and capture network and application traffic as if physically connected to the virtual layer.

Virtualization in the Cloud
There are three cloud scenarios, two of which can be remedied with the approaches described in the sections above: in-house and third-party infrastructure. An in-house cloud server is similar to distributed virtualization in that you have physical access to your systems and can add technologies like virtual taps. On a third-party cloud solution, or Infrastructure as a Service (IaaS), you can install your own software – like OmniVirtual – to perform network analysis on remote VM hosts.

The only time when it is truly difficult to perform network analysis in the Cloud is in the case of Software-as-a-Service (SaaS), or hosted services. Here, you are essentially at the mercy of the service provider, as you will not have access to virtual servers to install software of your own.

Regardless of your network virtualization type, there is almost always a way to gain full visibility into your network. Just follow the steps above or watch our on-demand webcast for further details.

Scale Your Network Visibility with WildPackets

Scalability is an issue that’s coming up more and more frequently as 10G and 40G networks grow in popularity. As networks grow in size, a network analysis solution’s scalability is measured by its ability to handle the growing volume of data and accommodate further growth.

Network growth demands more from network analysis: greater analytical throughput, broader scope, more data storage, and distributed analysis. As your network grows and you encounter these issues, there are ways to scale your visibility so that you’re not looking for a needle in a 10G haystack.

Architect for Visibility
As always, knowing your network is key. Know what traffic is important to your company. Is it mission critical business applications, like order entry, financials, and CRM? Or is it web-based traffic that’s driving your online retail business? Once you decide what, and how much, of this traffic requires ongoing monitoring and analysis, you’ll know where to look to specifically identify the traffic that you’ll want to capture. Building visibility into your network infrastructure can help both of these practices. Through strategic placement of analysis points, you’ll be able to get instant information to fix problems faster.

Visibility includes both summary level monitoring data and detailed network metrics, including visibility into network packet traffic and even specific packet decodes. Only a packet-based network analysis system, like the Omni Distributed Analysis Platform, provides the complete range of visibility required to monitor and troubleshoot today’s high-speed networks, keeping networks running smoothly and guaranteeing the very best end-user experience.

Backbone Visibility
Though often the fastest link in your infrastructure, the network backbone – the aggregation of all your distribution layer networks – can be an excellent point for monitoring network traffic and capturing network data for more detailed analysis. Depending on your overall network architecture, the network backbone may be a roll-up of just about all of your critical network traffic, especially if traffic is driven through a centralized network operations center (NOC), or if your company is a heavy user of cloud-based or other third-party SaaS applications that drive network traffic through your WAN link. Using high-speed network monitoring appliances on the network backbone can centralize your network monitoring and analysis, and save money by consolidating network analysis into a single appliance.
The aggregated traffic on the network backbone will typically be high speed, with more and more enterprises migrating to 10G backbones. Packet-based network analysis on the backbone means you’ll be interested in all of the packets, so you will likely need an appliance like WildPackets’ TimeLine network recorder, which captures at rates up to 12Gbps with zero packet loss. The TimeLine network recorder lets you store all your data for forensic analysis while continuously capturing network traffic. And if you’re already migrating your backbone to 40G, you can simply add an aggregation tap and a few more TimeLine appliances for a complete 40G solution.
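Sizing the storage behind a network recorder is simple arithmetic. A quick sketch – the function name and the 48 TB figure are illustrative assumptions, not WildPackets specifications:

```python
def capture_storage_tb(link_gbps: float, hours: float, utilization: float = 1.0) -> float:
    """Terabytes of raw packet data produced by recording a link for `hours`."""
    bytes_per_second = link_gbps * 1e9 / 8 * utilization  # line rate in bytes/s
    return bytes_per_second * 3600 * hours / 1e12

# A fully utilized 10G link yields 4.5 TB of packets per hour:
print(capture_storage_tb(10, 1))  # 4.5

# Retention a hypothetical 48 TB recorder provides at 50% utilization:
print(48 / capture_storage_tb(10, 1, utilization=0.5))  # ≈ 21.3 hours
```

Running the same numbers at 40G makes the point about aggregation taps: line-rate capture produces 18 TB per hour, which is why retention windows, not capture rates, often drive appliance counts.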

Adding Visibility to Virtual “Blind Spots”
Traditionally, north-south traffic was the most important in network monitoring. However, with the explosive growth of virtualization, east-west traffic is becoming more and more important in enterprise networks, and it poses a new challenge in network and application performance monitoring. East-west traffic is typically traffic moving within a virtual host or a distributed virtual system. Since much of this traffic resides solely within the virtual environment, and therefore never hits a physical network interface, traditional network monitoring and analysis done by tapping into the physical network does not capture it. For example, let’s say the order entry system and the inventory database reside on separate VMs within the same host or distributed system. Communications between the order entry application and the database are east-west traffic, and application performance issues between these systems are “hidden” within the VM. To add visibility, you can either install WildPackets OmniVirtual on one of the VMs to gain visibility into the entire host, or, in the case of larger, distributed virtual systems, use a virtual tap. Virtual taps are sold by many tap vendors, and they provide a physical link that traditional network monitoring appliances can access to expose east-west traffic within the virtual system.

For more information about how WildPackets can help scale your networks, check out our on-demand webcast.

10G Upgrades: Fact or Fiction?

When it comes to network bandwidth and data speeds, for the most part we are still living in a 1G-to-10G transition world. That’s not to say that no one has invested in 10G or even 40G, but the pool of enterprises moving in this direction is still quite small.

It is not a technology issue – there are plenty of products out there ready to handle 10G. The issue is cost. Businesses just don’t need 10x the bandwidth and are not willing to pay 3x the cost. Instead they are opting for multiple 1G channels. The Dell’Oro Group recently published a report detailing the underwhelming adoption statistics for companies moving to 10G.

There is, however, one common use case for enterprises moving to 10G. Below we illustrate this common use case, along with some best practices that have emerged for monitoring and analyzing data at 10G.

Common Use Case for Moving to 10G
One of our customers, a large manufacturing company, made the transition because of significant growth in their backup traffic. They were saturating bonded 2x and 4x 1G channels when their backups kicked in at night, so they started looking to virtualized architectures for help. After that transition, they needed 10G to support their virtualization server clusters, which had grown tremendously. Although this particular example is from a manufacturing company, this is one of the most common reasons we hear across many industries for initially making the switch to 10G – too much backup traffic, and not enough time.

What to Think About Before You Switch to 10G
Migration to 10G usually happens first at core switches and in the network backbone – and the same is also true if you’re already at 10G and planning a move to 40G. As we mentioned above, cost is a huge factor in folks holding back, and it is not only the equipment costs. Cabling, rewiring, and increased power consumption can all be issues, depending on the magnitude of the migration, so don’t forget to factor in ALL the costs.

Network Monitoring and Analysis at 10G
As part of your migration plan, remember to upgrade your network analysis and troubleshooting solution(s) as well. Network analysis at 10G is a completely different beast, and a beast it is! The days of point-and-shoot, real-time analysis fall by the wayside at 10G. Network recorders are a must. Installed at strategic locations, like WAN links and data centers, network recorders keep an ongoing record of all your network traffic. If alerts or alarms go off, you simply rewind your network traffic to see exactly what the problem was. If a particular user is having problems, or you need to retrieve packet-level data (network OR application) for a compliance investigation, again, the data is there and instantly searchable and retrievable.
With 10G, 24×7 recording and monitoring are requirements. Simply connecting an analyzer after a problem is reported is just not viable – there’s too much data, and trying to reproduce problems is a nightmare. You need to capture the traffic – packet-by-packet – so you can immediately recreate the conditions to analyze and solve the problem.

With 10G you also have to streamline and condense what you want to analyze. Attempting to monitor and analyze every type of data that streams through your network can be tedious, and there is a key limiting factor – storage capacity for all that data. If you do not need to analyze VoIP data, for example, take it out of the mix. Doing so reduces the overall storage required and typically increases the rate at which network traffic can be processed and stored.
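As a sketch of that kind of triage, a capture filter that drops VoIP before it ever reaches disk can be expressed as a standard BPF filter, which any libpcap-based tool (tcpdump, for instance) accepts. The SIP port and RTP range below are common defaults but vary by deployment – they are assumptions, not universal values:

```python
def exclude_voip_bpf(sip_port: int = 5060, rtp_range: tuple = (16384, 32767)) -> str:
    """Build a BPF capture filter that drops SIP signaling and an RTP port range.

    Traffic matching the filter is discarded at capture time, so it never
    consumes recorder storage. Ports here are typical defaults (assumptions);
    check your PBX configuration for the ranges actually in use.
    """
    low, high = rtp_range
    return f"not (udp port {sip_port} or udp portrange {low}-{high})"

print(exclude_voip_bpf())
# not (udp port 5060 or udp portrange 16384-32767)
```

The resulting string could then be passed on the tcpdump command line, e.g. `tcpdump -w capture.pcap 'not (udp port 5060 or udp portrange 16384-32767)'`.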

2013 might not be the year for you to transition over to 10G, but it is the year to start planning and thinking about what that transition will look like, especially if you are looking to move to a more virtualized data center. If you are interested in learning more about best practices for moving to high-speed networks and the new role that overlay networks play in this transition, check out our webcast “Packet capture in high-speed and data center networks.”