

Improving Visibility in Virtual Networks

Over the past decade, as networks became increasingly complex and crucial for businesses across all verticals, the IT industry developed new ideas about network design. Virtualization was born out of that process, as businesses—particularly data centers and enterprises maintaining large networks—sought to curb ever-growing hardware-related expenses. Cloud services, built on virtualization technology, gradually made their way into the mainstream during that time as well.

Indeed, using virtualization to separate control plane from data plane functions and allowing managers to maintain networks from one or just a few screens can help businesses reduce IT expenses by limiting the space, power and cooling requirements for hardware. But virtualization also limits network visibility, making it nearly impossible to monitor new kinds of traffic, which include:

  • Invisible—remains in the rack and never travels across the physical network, making it difficult to capture
  • Intrahost—involves one physical host and several virtual machines
  • Interhost—where multiple virtual hosts are connected on a virtual fabric, resulting in a collection of virtual systems with only one physical connection to the network

So what can businesses do? Continuing to scale hardware is financially unsustainable, but operating without network visibility is too risky. For businesses stuck between these two undesirable options, WildPackets virtual network analysis solutions can bring back the real-time troubleshooting capability that technicians need to do their jobs effectively. Such bleeding-edge technology continuously monitors critical applications running in virtual environments and notifies IT immediately if a problem arises. The solution also records all network traffic, which can be reviewed at any time.

Virtualization presents new difficulties for network professionals, but these problems do have solutions. If you’re interested in learning more about virtualization and its impact on network monitoring, click here to download our white paper entitled “Managing Networks in the Age of Cloud, SDN, and Big Data: Network Management Megatrends 2014.”

The Challenges of Virtualization in Higher Speed Networks

The proliferation of virtualization coupled with the increase in 10G, 40G, and 100G networks has created blind spots in network and application monitoring. While virtualization has been widely adopted as a means of cutting costs and increasing efficiencies, allowing organizations to respond faster to changing business demands, the lack of network visibility increases the challenges in diagnosing and analyzing performance issues, both network and application.

Preventing performance issues and outages in such environments is critical to maintaining the pace of business, and as networks grow, monitoring and managing network performance becomes increasingly complex and expensive. Therefore, IT administrators must work to ensure their company’s networks can rapidly scale and deliver computing resources efficiently.

The issue is further highlighted by the results of our recent survey, The State of Faster Networks, which found that 43 percent of respondents reported limited or no network visibility as their biggest challenge in their transition to 10G+ networks. To combat these challenges, respondents stated they need more real-time statistics and faster forensic search times, two capabilities that become even more important in virtual environments.

Virtual servers remain a very tempting target for security breaches. An attacker only has to compromise one layer in order to gain access to many different layers. And with the introduction of blind spots in virtual systems, the potential for an organization to remain in the dark about security vulnerabilities is even higher.

So, what causes these virtual network blind spots? In traditional network analysis, physical LANs and physical Ethernet adapters connect directly to a physical appliance. However, in the case of virtualization, applications may be communicating with each other without ever accessing a physical network port. This traffic never leaves the virtual machine, and there is no practical way to monitor or manage this internal virtual network traffic.

Solutions for eliminating the blind spots vary, depending on the complexity of the virtual environment. For stand-alone virtual servers, a software probe that runs as one of the virtualized applications is often enough to capture and analyze the traffic across the entire virtual server, offering a cost-effective solution to eliminate blind spots within the server. For more complex systems consisting of multiple servers or blades across a virtual backbone, a dedicated network analysis appliance is the best solution for gaining visibility into the entire virtual system. If the system being used offers the capability to span virtual switch ports, enabling this feature will allow the network analysis appliance to directly connect to the virtual network traffic. If not, third-party virtual taps can be used to tap the virtual traffic and make it available to external network analysis appliances.
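To make the first option more concrete, here is a minimal sketch of what a software probe running inside a virtual server could look like. It assumes Python 3 with the scapy library and a virtual adapter named "eth0"—both assumptions for illustration, not details from the article—and simply tallies the conversations it sees on the virtual NIC, traffic that might otherwise never reach a physical monitoring port.

from collections import Counter
from scapy.all import IP, sniff

conversations = Counter()

def tally(pkt):
    # Count conversations seen on the virtual adapter -- traffic that may
    # never cross a physical network port.
    if IP in pkt:
        conversations[(pkt[IP].src, pkt[IP].dst)] += 1

# Capture 1,000 packets on the virtual NIC; with a spanned vSwitch port,
# this would also see interhost traffic crossing the virtual fabric.
sniff(iface="eth0", prn=tally, store=False, count=1000)

for (src, dst), count in conversations.most_common(10):
    print(f"{src} -> {dst}: {count} packets")

A dedicated analysis appliance does far more than this, of course; the point is only that a probe placed inside the virtual environment sees traffic that an external appliance cannot.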

If you are working in a virtual environment and encounter problems capturing data, view our webcast, “The Blind Spot in Virtual Servers: Seeing with Network Analysis.” With the tips you’ll learn, you’ll be on your way to a more efficient and reliable network analysis solution in no time.

The Growth of Data on the Network and What You Should Do about It

More applications, more devices, and server virtualization adoption are all key contributors to the growth of data on networks. Recently, we came across an Infonetics research report showing that demand for higher-speed ports (10G, 40G, 100G) rose 62% from 2012. Not a surprise, really, for anyone in the networking industry.

The volume of data on networks is colossal, and growth continues seemingly unabated.

So what does that mean for a network engineer in terms of monitoring and analyzing data? How should your habits and practices change?

Below we provide four key tactics that network engineers should follow when handling increased network data.

Continuous Capture
With network backbones either at 10G or greater, it is essential to capture data 24/7. Traditional network analysis in the form of portable troubleshooting is no longer an option. By the time you dig out the network analyzer, find the right port(s) to monitor on the 10G switch, and get things fired up, the problem is ancient history. And most laptops aren’t going to have a 10G card in them, and even if they do, “standard” network interface cards (NICs) are not up to the task of lossless 10G packet capture. At 10G, you need dedicated hardware that can capture data 24/7 so troubleshooting can begin the instant an issue occurs.
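To illustrate the rotate-and-overwrite idea behind continuous capture, here is a minimal Python sketch using scapy and a hypothetical interface name. It is only a conceptual sketch; a real 10G deployment relies on dedicated capture hardware, not a script.

from scapy.all import sniff, wrpcap

RING_SLOTS = 24           # capture files kept on disk before overwriting
PACKETS_PER_SLOT = 10000  # rotate to the next file after this many packets

slot = 0
while True:
    # Fill the current slot, then move on; the oldest slot is overwritten,
    # so the disk always holds the most recent window of traffic.
    packets = sniff(iface="eth0", count=PACKETS_PER_SLOT, store=True)
    wrpcap(f"capture_{slot:02d}.pcap", packets)
    slot = (slot + 1) % RING_SLOTS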


Adequate Storage
Network analysis at 10G requires not just new hardware and 24/7 monitoring; it also requires a different approach. Detailed, real-time analysis is just not practical at 10G – and it’s not required, since the problem you’re looking for only encompasses a small subset of the data. What is required is ongoing recording of all network data (packets) so you can “rewind” to the timeframe of interest and perform a more targeted analysis of the specific problem. To do this, you need to store all of this packet data so it’s available when you begin your investigation. For example, if you’re recording at a full 10Gbps and you have 32TB of disk space in your appliance, you can record about 7.0 hours of network data. Fortunately, even on a 10G network segment, you’re not going to find 10Gbps steady state on the line, so you should have enough storage space even if the problem occurs overnight. However, if you need storage for an entire weekend, you need to carefully plan your disk space against your expected aggregate traffic. One solution is to add an aggregation tap in your network infrastructure. This helps by sending packet data to multiple appliances and increases the overall storage available for heavily utilized high-speed networks.
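The arithmetic behind those figures is easy to check. The snippet below recomputes recording time from the link rate, average utilization, and disk size; the values are the ones assumed above and should be swapped for your own.

DISK_BYTES = 32 * 10**12    # 32 TB of appliance storage
LINE_RATE_BPS = 10 * 10**9  # 10 Gbps link

def recording_hours(utilization):
    # Bytes written per second at the given average utilization.
    bytes_per_second = LINE_RATE_BPS * utilization / 8
    return DISK_BYTES / bytes_per_second / 3600

print(f"100% utilization: {recording_hours(1.0):.1f} hours")  # ~7.1 hours
print(f" 30% utilization: {recording_hours(0.3):.1f} hours")  # ~23.7 hours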

Proper Capture Points
If you are monitoring a physical network connection, your capture points are pretty obvious, especially when dealing with a network backbone. However, with the increased volume of east-west traffic due to virtualization, you may need to adjust your monitoring points, or add some, to maintain full visibility. The best way to deal with this in a distributed virtual environment is to add a vSwitch into the architecture and use it as the connection point for your network analysis appliance. For more details on this tactic, check out our blog “Where to capture packets in high-speed and data center networks.”
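One way to gauge how much of your traffic is east-west, and therefore invisible to a probe sitting only on the physical uplink, is to classify the endpoints in an existing capture. The sketch below assumes Python 3 with scapy, a capture file named capture.pcap, and virtual hosts living in 10.0.0.0/24 – all hypothetical placeholders to adapt to your environment.

from ipaddress import ip_address, ip_network
from scapy.all import IP, rdpcap

LOCAL = ip_network("10.0.0.0/24")  # hypothetical range for the virtual hosts
east_west = north_south = 0

for pkt in rdpcap("capture.pcap"):
    if IP not in pkt:
        continue
    src, dst = ip_address(pkt[IP].src), ip_address(pkt[IP].dst)
    if src in LOCAL and dst in LOCAL:
        east_west += 1    # stays within the virtual environment
    else:
        north_south += 1  # crosses the physical uplink

print(f"east-west packets: {east_west}, north-south packets: {north_south}")

A high east-west count is a strong hint that a capture point on the virtual switch, not just the backbone, is needed for full visibility.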

Prioritization
Prioritizing the data you collect is key. Any amount of data that you can filter out increases the overall throughput of data you can monitor and extends the range of your available storage. For example, if you have a lot of web traffic on your network (and who doesn’t), and it’s all encrypted, why not slice all of the payloads off the data? This will significantly reduce the overall volume of data. Or perhaps backups are the biggest source of overnight network traffic. Again, you really don’t need the payloads of backup traffic; you really just want to know that it’s happening and perhaps log some metrics like the latency of the transfers. By leveraging what you know about your own network you can significantly reduce your network analysis needs, and either save money or extend the capabilities of your existing assets.
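As a rough sketch of the payload-slicing idea, the snippet below strips the encrypted bytes from web traffic while keeping every header needed for analysis. It assumes Python 3 with scapy, an input file named capture.pcap, and that encrypted web traffic runs over TCP port 443 – adapt the filter to whatever dominates your own network. It is illustrative only; a capture appliance slices packets at capture time rather than as a post-processing step.

from scapy.all import TCP, rdpcap, wrpcap

def slice_payload(pkt):
    # Drop the encrypted bytes of web traffic but keep the headers, which
    # are usually all that is needed for performance analysis. (Length and
    # checksum fields are left stale; this is only an illustration.)
    if TCP in pkt and 443 in (pkt[TCP].sport, pkt[TCP].dport):
        pkt[TCP].remove_payload()
    return pkt

packets = rdpcap("capture.pcap")
wrpcap("sliced.pcap", [slice_payload(p) for p in packets])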