

IP Video – It’s like Living with a Teenager

Teenagers. Maybe you have one (or more) at home; maybe not. But we’ve all been one, so I know you can relate. Moody and unpredictable. Overly sensitive. Taking up more space than any human has a right to. High maintenance. They’re just so adorable.

Well, it turns out we have an exploding data type on our networks that behaves much the same way – IP video. A recent Cisco whitepaper reports that all forms of video (TV, VoD, Internet, and P2P) will account for approximately 90% of global consumer Internet traffic by 2015 – and that's 90% of what will be 966 exabytes, or nearly a zettabyte, of IP data. Although video traffic on the enterprise side will not be as heavy as that on the consumer Internet, it will increase dramatically nonetheless, and will certainly make up much more than 50% of enterprise network traffic by 2015. It looks like you're going to need both network management and high school guidance counselor skills by 2015 to manage enterprise networks.

With this dramatic increase in video traffic, video will be in competition with enterprise corporate data, enterprise application access, SaaS, and cloud computing. And given its tendency towards teenage behavior, you’re going to have your hands full. Below are a few details of how the characteristics of IP video can adversely affect your enterprise network.

Unpredictable
Video is “bursty,” or in the teenage analogy, unpredictable, which is an undesirable characteristic for networks that work best under stable conditions – predictable and consistent. Packet sizes range all over the place, and often hit the network in large bursts. And of course these bursts are tagged with high QoS (quality of service) tags, so they take precedence over your other mission critical application data. Characterization of your IP video traffic, including weeding out business traffic from surfing, is critical to the health of your enterprise network.
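To make the QoS point concrete, here is a minimal sketch (plain Python, no capture library assumed) of how an analyzer can read the DSCP marking that gives those video bursts their priority – the code point lives in the top six bits of the second byte of the IPv4 header:

```python
# Sketch: extracting the DSCP (QoS) marking from a raw IPv4 header.
# High-priority video is often marked AF41 (DSCP 34) or EF (DSCP 46),
# though the exact markings depend on your QoS policy.

def dscp_of(ipv4_header: bytes) -> int:
    """Return the DSCP code point from a raw IPv4 header."""
    ds_field = ipv4_header[1]   # byte 1 is the DS field (former ToS byte)
    return ds_field >> 2        # top 6 bits are DSCP, bottom 2 are ECN

# A minimal 20-byte header with the DS field set to AF41 (DSCP 34):
header = bytes([0x45, 34 << 2]) + bytes(18)
print(dscp_of(header))          # -> 34
```

Characterizing traffic by DSCP value is one quick way to see how much of your "highest priority" bandwidth is actually business video versus recreational streaming.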

Space Hog
Video is a bandwidth hog. One HD video stream can consume up to 20Mbps of bandwidth, so if five people stream a movie simultaneously, they take up 100Mbps of your network. This may not seem like a ton of traffic, but depending on how these users are distributed across your network, and how many users are serviced, bandwidth availability can certainly become an issue. And remember, the amount of video on your network is increasing all the time.
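The arithmetic above is simple enough to sketch. The 20Mbps-per-stream figure is the worst case quoted above; the 1Gbps link capacity below is purely hypothetical:

```python
# Back-of-the-envelope video load: streams x worst-case per-stream rate,
# expressed as a fraction of a (hypothetical) link.

def video_load_mbps(streams: int, mbps_per_stream: float = 20.0) -> float:
    """Aggregate bandwidth consumed by concurrent HD video streams."""
    return streams * mbps_per_stream

link_capacity_mbps = 1000            # hypothetical 1 Gbps link
load = video_load_mbps(5)            # five concurrent HD streams
print(f"{load:.0f} Mbps, {100 * load / link_capacity_mbps:.0f}% of the link")
# -> 100 Mbps, 10% of the link
```

Ten percent of a core link for five users sounds tolerable; the same five users behind a 100Mbps access switch saturate it completely, which is why distribution matters as much as totals.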

Overly Sensitive
Video is also very sensitive to latency, jitter, and packet loss – even more so than voice, which we covered in an earlier blog post. These sensitivities demand that your network perform at its peak to keep such impairments to a minimum, and as video becomes more common on the network, those performance demands will only grow and become harder to meet.
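To see what "jitter" actually measures, here is a sketch of the interarrival-jitter estimator that RTP receivers use (the smoothed formula from RFC 3550); the packet gaps below are made up for illustration:

```python
# RFC 3550 interarrival jitter: D compares the spacing of two packets
# at the sender vs. at the receiver; J is an exponentially smoothed
# mean of |D| with gain 1/16.

def update_jitter(jitter: float, send_gap_ms: float, recv_gap_ms: float) -> float:
    """One step of the RFC 3550 jitter estimator, in milliseconds."""
    d = recv_gap_ms - send_gap_ms        # transit-time difference
    return jitter + (abs(d) - jitter) / 16.0

# Packets sent every 20 ms but arriving with variable spacing:
jitter = 0.0
for recv_gap in (20.0, 23.0, 18.0, 25.0):
    jitter = update_jitter(jitter, 20.0, recv_gap)
print(f"estimated jitter: {jitter:.2f} ms")
```

Because the estimate is smoothed, a single late packet barely moves it, but sustained variability steadily drives it up – exactly the condition that makes video stutter.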

High-Maintenance
Due to the high performance demands of video, it is typically tagged for the highest QoS delivery as I mentioned earlier. However, as video traffic starts exceeding data traffic, enterprises will need to maintain different quality of service between users or video types since it is self-defeating for most of the traffic on a network to have the highest QoS tagging.

As video continues to grow on – or, as some might say, invade – your enterprise network, it is more important than ever to plan and design your network to carry video. And just as the teenage years pass, the video phase will also pass in time, allowing networks to again hum along in a predictable pattern. That is, until the next disruptive technology comes along! In next week's blog, we'll be providing some best practices on designing, monitoring, and managing your network to help that teenager grow up.

Don’t Let the Network Get the Best of You: Take a Proactive Approach

In our last post, we discussed research conducted by Jim Frey from EMA on what is hampering organizations from effectively managing applications and services: poorly documented or controlled changes to applications and infrastructure, poor coordination among support teams, and lengthy troubleshooting and root-cause analysis. If you are experiencing these problems, here are the top three strategies, defined both by EMA and WildPackets, that will take you from reactive problem-solving to proactive performance assurance.

1. Application Performance Is King.

As a network professional, you need to know what is happening at the network layer, but the value that is most important and easily perceived by your users and the guys who sign your paycheck is in the application and service layer – i.e., are you quickly delivering information and results over the network?

Having visibility into your applications is key if you want to quickly troubleshoot and solve issues when they arise. As a network engineer, request tools and develop processes that:

  • Protect the most important applications and services
  • Prioritize actions based on impact
  • Recognize new traffic contributors/aggravators and their sources before they become an issue
  • Deliver enterprise-wide visibility – visualizing all applications on your network – and can be used 24×7

You may need a mix of application-aware instrumentation – SNMP, flow-based monitoring, packet-based monitoring, and synthetic and passive agents – to cover all areas of your network. WatchPoint 2.0 is an excellent solution since it combines SNMP, flow-based monitoring, and packet-based monitoring in one package to deliver a more comprehensive management solution and keep costs down.

2. Manage from Cradle to Grave.

There is value to be gained by moving the typical monitoring, baselining, and characterization approaches that are used during production earlier into the application rollout process. This will help you better understand what impact new applications will have on your system.

For example, consider a VoIP project you are about to deploy. Before implementation, you need to establish a baseline of your current network performance, including the number of users over time, peak usage times, average and peak latency measurements, etc. Networks have rhythms, so it's best to assess network behavior over a long period – at least several weeks, and perhaps even a month. Organizations can start this process by looking at their Internet connections, WAN links, WLAN environments, and data centers. We suggest you look into network analyzers to help you baseline.
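As a sketch of the kind of summary such a baseline might capture – the latency samples below are invented, and a real baseline would span weeks of data, as noted above:

```python
# Sketch: reducing periodic latency samples (however collected) to the
# baseline figures worth recording: average, peak, and a high percentile
# that characterizes "normal but busy".

from statistics import mean, quantiles

def baseline(latencies_ms: list[float]) -> dict[str, float]:
    """Summarize a series of latency samples into baseline statistics."""
    return {
        "avg_ms":  mean(latencies_ms),
        "peak_ms": max(latencies_ms),
        "p95_ms":  quantiles(latencies_ms, n=20)[18],  # 95th percentile
    }

# Hypothetical round-trip latency samples, in milliseconds:
samples = [12.0, 14.5, 11.0, 30.2, 13.1, 12.7, 15.0, 55.4, 12.2, 13.8]
print(baseline(samples))
```

The percentile matters because averages hide spikes: a 19ms average looks healthy even when occasional 55ms outliers are already brushing against VoIP tolerances.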

And of course it's important to continue to monitor and baseline your network after you roll out your new VoIP deployment so you can quickly see whether its impact is consistent with your predictions.

3. Take a Proactive Approach to Troubleshooting.

Most people consider troubleshooting to be a reactive approach, but troubleshooting can be proactive as well. Proactive troubleshooting implies that constant and comprehensive monitoring is in place so that when errors arise they can be solved immediately, before they become major problems.

It still surprises me how many enterprises invest in network monitoring and analysis solutions that are designed to operate 24×7 – constantly analyzing the network for faults and providing up-to-the-minute network statistics – yet use these solutions in an entirely reactive way, firing them up only after a network problem has been reported. You've already made the investment; why not leave that highly capable network monitoring and analysis solution running in the background, 24×7, always ready to alert you to issues on your network? In other words, use these solutions for proactive troubleshooting. For example, OmniEngines and Omnipliances have a whole series of Expert events running in the background, ranging from layer 2 to layer 7 analyses. When an error occurs, you are automatically alerted and provided with information to isolate and solve the problem immediately.
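Stripped to its essence, proactive monitoring is a loop that compares live metrics against baseline-derived thresholds and raises an alert the moment one is breached. A toy sketch – the metric names and threshold values here are illustrative, not taken from any WildPackets product:

```python
# Sketch: threshold-based proactive alerting. Thresholds would normally
# come from your baseline plus headroom; these values are hypothetical.

THRESHOLDS = {"latency_ms": 150.0, "jitter_ms": 30.0, "loss_pct": 1.0}

def check_metrics(sample: dict[str, float]) -> list[str]:
    """Return an alert message for every metric exceeding its threshold."""
    return [
        f"ALERT: {name} = {sample[name]} exceeds {limit}"
        for name, limit in THRESHOLDS.items()
        if sample.get(name, 0.0) > limit
    ]

# A single polled sample with two metrics out of bounds:
for alert in check_metrics({"latency_ms": 180.0, "jitter_ms": 12.0, "loss_pct": 2.5}):
    print(alert)
```

The point of the sketch is the workflow, not the code: the check runs on every sample, continuously, so the first breach is caught long before a user picks up the phone.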

A proactive approach is the key to successful network management. Proactive analysis includes baselining your network before new applications and technologies are deployed in order to see exactly how they affect your network and whether or not the impact is as predicted. Proactive analysis also includes leveraging the full value of the network analysis solutions that may already be sitting on your shelf. Don’t let them sit idle! Plug them in and use them 24×7 to provide ongoing Expert analysis and alerts the instant that trouble begins brewing. Taking this approach will make your end users forget all about you, and in network management that’s a good thing! Just make sure the guys who sign your paycheck don’t forget about you…

How to Better Manage Applications and Services on Your Network

In our “Best Practices in Enterprise-wide Network Performance Management” webinar that we co-hosted with EMA research analyst Jim Frey, Jim provided some key insights into how to better manage applications and services from a network perspective, based on a recent report by EMA. The key research areas included an assessment of factors that hamper organizations’ effectiveness in terms of managing applications and services, how these organizations look to address these issues, and EMA’s opinion on how to approach and solve the identified problems.

Here is what the study found:

What is hindering an organization's ability to manage applications and services?

  • Changes to applications and infrastructure are not well documented or controlled.
  • Poor coordination exists between support teams.
  • Troubleshooting and root cause analysis are taking too long.

What top three products and/or functionalities do organizations currently lack?

  • Consolidated event correlation
  • Change tracking, verification, and audits to better understand changes in the infrastructure
  • Better tools for transaction management, problem identification, and root cause analysis

Not surprisingly, these questions have very similar answers, and they certainly reflect the basic problems currently being experienced in enterprise network management – poor communication (probably due to understaffed IT/network engineering departments), poor record-keeping, and a lack of appropriate tools, especially those for root cause analysis. The last point is of particular interest to us, since root cause analysis is our business, and we know we can solve this problem for every enterprise network. As with any dependency, the first step is to admit you have a problem, which apparently enterprises are now doing. And the best part is that you can do much more than address root cause analysis with a deep packet inspection network analysis solution. You can also provide summary-level statistics for network reporting and monitoring, as well as record network activity for forensic analysis and compliance verification.

Next week we’ll discuss the top three strategies for “Proactive Performance Assurance” as defined by EMA to better address the management of applications and services on the network.