Tag Archives: Network latency

IP Video – It’s like Living with a Teenager

Teenagers. Maybe you have one (or more) at home; maybe not. But we’ve all been one, so I know you can relate. Moody and unpredictable. Overly sensitive. Taking up more space than any human has a right to. High maintenance. They’re just so adorable.

Well, it turns out we have an exploding data type on our networks that behaves much the same way – IP video. In a recent whitepaper by Cisco, it was reported that all forms of video (TV, VoD, Internet, and P2P) will be approximately 90% of the global consumer Internet traffic by 2015. And per the report, that’s 90% of what will be 966 exabytes, or nearly a zettabyte, of IP data. To see what that looks like graphically, check out this link. Although video traffic on the enterprise side will not be as heavy as that on the consumer Internet, it will increase dramatically nonetheless, and will certainly be much more than 50% of the enterprise network traffic by 2015. It looks like you’re going to need both network management and high school guidance counselor skills by 2015 to manage enterprise networks.

With this dramatic increase in video traffic, video will be in competition with enterprise corporate data, enterprise application access, SaaS, and cloud computing. And given its tendency towards teenage behavior, you’re going to have your hands full. Below are a few details of how the characteristics of IP video can adversely affect your enterprise network.

Video is “bursty,” or in the teenage analogy, unpredictable, an undesirable trait for networks that work best under stable, predictable, and consistent conditions. Packet sizes vary widely, and packets often hit the network in large bursts. And of course these bursts carry high quality of service (QoS) markings, so they take precedence over your other mission-critical application data. Characterizing your IP video traffic, including separating business video from recreational surfing, is critical to the health of your enterprise network.
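As a rough illustration of what “bursty” looks like in a capture, here is a minimal sketch that buckets a packet trace into per-second volumes and flags outlier intervals. The trace, interval size, and 2-sigma threshold are all hypothetical choices for the example, not anything from this post:

```python
from statistics import mean, stdev

def bucket_throughput(packets, interval=1.0):
    """Aggregate (timestamp, bytes) samples into per-interval byte counts."""
    buckets = {}
    for ts, size in packets:
        key = int(ts // interval)
        buckets[key] = buckets.get(key, 0) + size
    return [buckets.get(i, 0) for i in range(max(buckets) + 1)]

def find_bursts(per_interval, sigma=2.0):
    """Flag intervals whose volume exceeds mean + sigma * stdev."""
    m, s = mean(per_interval), stdev(per_interval)
    return [i for i, v in enumerate(per_interval) if v > m + sigma * s]

# Hypothetical trace: a steady 100 KB/s flow, plus a 2 MB burst in second 5.
trace = [(t + 0.5, 100_000) for t in range(10)]
trace += [(5.2 + i * 0.01, 100_000) for i in range(20)]  # the burst
volumes = bucket_throughput(trace)
print(find_bursts(volumes))  # the bursty interval stands out
```

In practice the input would come from a packet capture rather than a synthetic list, but the idea is the same: steady flows average out, while video bursts show up as statistical outliers.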

Space Hog
Video is a bandwidth hog. One HD video stream can consume up to 20 Mbps of bandwidth. So if five people stream a movie at the same time, they consume 100 Mbps of your network. This may not seem like a ton of traffic, but depending on where these users sit on your network and how many users a given link services, bandwidth availability can certainly become an issue. And remember, the amount of video on your network is increasing all the time.
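The arithmetic above is easy to sketch as a quick capacity check. The 1 Gbps link capacity below is an assumed example for illustration, not a figure from the post:

```python
def video_headroom(viewers, mbps_per_stream=20.0, link_mbps=1000.0):
    """Return (demand_mbps, fraction_of_link) for concurrent HD streams."""
    demand = viewers * mbps_per_stream
    return demand, demand / link_mbps

# Five HD viewers at 20 Mbps each, as in the text, on an assumed 1 Gbps link:
demand, share = video_headroom(5)
print(demand, share)  # 100 Mbps, i.e. 10% of the link
```

Run the same check for the busiest access link in the path, not just the core, since that is where five simultaneous streams hurt first.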

Overly Sensitive
Video is also very sensitive to latency, jitter, and packet loss, even more so than voice, which we covered in this blog post. These sensitivities demand that your network perform at its peak to keep such impairments to a minimum. As video becomes more common on the network, performance demands will continue to grow and become harder to meet. The specific latency, jitter, and packet loss targets are illustrated in the video segment and graph below.
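As a rough stand-in for those figures, here is a sketch that checks a stream’s measured metrics against ballpark targets. The threshold values are commonly cited industry rules of thumb for interactive video, not numbers from the original post, and real deployments should use vendor-specific targets:

```python
# Commonly cited one-way targets for interactive video (ballpark
# rules of thumb, assumed for this example):
TARGETS = {"latency_ms": 150, "jitter_ms": 30, "loss_pct": 1.0}

def check_stream(metrics, targets=TARGETS):
    """Return the names of any metrics that exceed their target."""
    return [name for name, limit in targets.items()
            if metrics.get(name, 0) > limit]

# Hypothetical measurement from one video stream:
sample = {"latency_ms": 120, "jitter_ms": 45, "loss_pct": 0.2}
print(check_stream(sample))  # jitter is out of spec here
```

The point of the exercise: a stream can pass two of the three checks and still be unwatchable, so all three metrics need to be monitored together.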

Due to the high performance demands of video, it is typically tagged for the highest QoS delivery as I mentioned earlier. However, as video traffic starts exceeding data traffic, enterprises will need to maintain different quality of service between users or video types since it is self-defeating for most of the traffic on a network to have the highest QoS tagging.
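QoS tagging of this kind is typically done by marking the DSCP field in each packet’s IP header. As a minimal sketch, an application can mark its own traffic through a socket option; the EF and AF41 code points below reflect common conventions (voice and interactive video respectively), and whether the OS honors `IP_TOS` varies by platform:

```python
import socket

# Common DSCP code points (shifted left 2 bits into the IP TOS byte).
DSCP_EF = 46    # Expedited Forwarding, conventionally voice
DSCP_AF41 = 34  # Assured Forwarding 41, often interactive video

def mark_socket(sock, dscp):
    """Set the DSCP field so routers can differentiate this traffic."""
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
mark_socket(s, DSCP_AF41)
print(s.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))  # DSCP_AF41 << 2
s.close()
```

In an enterprise, marking is usually enforced at switches and routers rather than trusted from endpoints, which is exactly the lever for keeping recreational video out of the highest class.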

As video continues to grow, or as some might say invade, your enterprise network, it is more important than ever to plan and design your network to carry video. And just as the teenage years pass, the video phase will also pass in time, allowing networks to again hum along in a predictable pattern. That is, until the next disruptive technology comes along! In next week’s blog, we’ll provide some best practices on designing, monitoring, and managing your network to help that teenager grow up.

The New TimeLine 2 Network Recorder is Well-Suited for Telepresence

Today, WildPackets announced TimeLine 2.0, the second installment of our premier network recorder and the fastest continuous network traffic capture and analysis solution to offer detailed network and VoIP statistics in real time.

TimeLine sets a new standard in capture-to-disk speeds, offering unsurpassed network traffic collection and recording, quick data rewinding, simultaneous real-time network monitoring, and rapid search and forensic analysis of collected data. With TimeLine, network issues of any type can be identified, analyzed, reconstructed, and resolved quickly and efficiently.

So, what’s new in TimeLine 2?
TimeLine 2’s ability to display critical VoIP and network statistics in real time, including top nodes and protocols, call utilization versus network utilization, and call quality over time, makes it easier than ever for network administrators to quickly pinpoint network issues. These capabilities are essential for businesses that are utilizing telepresence technologies and need to maintain network uptime.

Additionally, network information recorded with TimeLine can be seamlessly analyzed with the latest edition of WildPackets’ award-winning OmniPeek Network Analyzer. OmniPeek gives network administrators complete visibility into the health of their networks in real time.

Advancements in the new version of OmniPeek include:

  • Compass, a new interactive dashboard that provides real-time visibility into key network statistics over long periods of time with on-the-fly data retrieval and complete OmniPeek analysis
  • Wireless 3-stream network analysis support for the latest 802.11n equipment
  • Call Data Records (CDR), which include key media quality and summary information
  • The ability to analyze over 2,000 simultaneous calls for easy monitoring

If you’re at Interop Las Vegas this week, come visit us at booth #639 – we’ll be happy to give you a brief demo of what TimeLine can do for you. If you can’t stop by, check out our TimeLine OnDemand Webcast titled “The Need for Speed – No More Compromises!”

Three Keys to Breaking Down Network and Application Latency

Like drivers on a highway, every user is different when it comes to the speed they find acceptable. Weather conditions, mood, or simply time can cause a driver to adjust their speed. The same is true with applications and the time users are willing to wait for them to load. Some users don’t mind a long wait to download a report, but they will feel frustrated when a seemingly instantaneous application takes a few seconds longer than usual to load. Application Response Time (ART) is really about user experience, and there are many ways to measure it, each depending on individual expectations.

However, when a new application is rolled out on the network, IT must be able to predict how it will perform for the business, and one of the key performance metrics for application performance is latency. Furthermore, existing mission-critical applications must not be negatively impacted by new applications. It’s important that network engineers are able to assess things like latency, both from the network and the application itself, and provide immediate feedback on how new applications behave on the network.

Here are three keys to determine whether latency is caused by the network or the applications in your environment.

1. Application vs. Network latency – you mean there’s a difference?
It seems as though every application issue is blamed on the network until proven otherwise. Clearly measuring network latency vs. application latency is the proof the network engineer needs. Packet-level monitoring is ideal for accumulating evidence. By visually inspecting a packet-level conversation between a client and a poorly performing application, one can see whether the network (or a network device) is the source of the delay, or if the application is the bottleneck. This is done by comparing the responsiveness of the TCP ACK to a client request versus the application response, which includes actionable payload data.
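The comparison described above boils down to three timestamps from a single TCP conversation. A minimal sketch, using hypothetical capture timestamps rather than a real trace:

```python
def split_latency(t_request, t_ack, t_first_data):
    """
    Split response delay into network vs. application components
    from three packet timestamps in one TCP conversation:
      t_request    -- the client's request leaves the client
      t_ack        -- the server's bare TCP ACK (network stack only,
                      no application involvement)
      t_first_data -- the first response packet carrying payload
    """
    network_ms = (t_ack - t_request) * 1000
    application_ms = (t_first_data - t_ack) * 1000
    return network_ms, application_ms

# Hypothetical capture: the ACK returns in 4 ms, the payload 850 ms later.
net, app = split_latency(10.000, 10.004, 10.854)
print(f"network {net:.0f} ms, application {app:.0f} ms")
```

A fast ACK followed by a slow first data packet is the classic signature of an application (or server) bottleneck; a slow ACK points back at the network path.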

2. Test and monitor key applications and network connections
Active monitoring can provide insight into network performance on key interfaces, and can alert you when conditions begin to decline. For example, consider a web-hosted CRM application. If the network admin runs periodic checks in the background against the application host, the results provide an ongoing baseline of network performance between users and the host. If latency climbs above that baseline, alarms notify the team that network latency is becoming an issue.
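A minimal sketch of that baseline-and-alarm logic, with hypothetical probe readings and an assumed 2x alert factor (real tooling would probe an actual host and tune the window and factor):

```python
from collections import deque
from statistics import mean

class LatencyBaseline:
    """Rolling baseline of probe latencies; raise an alarm when a new
    reading drifts well above the historical average."""

    def __init__(self, window=20, factor=2.0):
        self.samples = deque(maxlen=window)
        self.factor = factor

    def record(self, latency_ms):
        # Require a few samples before trusting the baseline.
        alarm = (len(self.samples) >= 5 and
                 latency_ms > self.factor * mean(self.samples))
        self.samples.append(latency_ms)
        return alarm

# Simulated periodic probes to a hypothetical CRM host (ms):
monitor = LatencyBaseline()
readings = [22, 25, 21, 24, 23, 26, 22, 95]  # the last probe spikes
alarms = [monitor.record(ms) for ms in readings]
print(alarms)  # only the spike trips the alarm
```

The bounded window matters: it lets the baseline track slow, legitimate drift (a new WAN link, more users) while still catching sudden degradation.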

3. Keep track of latency over time
Graphing latency over time helps to identify patterns and anomalies that deserve closer attention. Latency monitoring can help correlate areas of latency with other relevant statistics, as well as with the actual network traffic occurring at that time. This type of high-resolution forensic analysis lets you spot latency problems at a high level and drill down quickly for a closer look. Ideally, network latency and application latency are graphed together over time; comparing the two measurements and seeing where they diverge can reveal information that might otherwise be overlooked.
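Graphing over time starts with bucketing timestamped samples into fixed intervals so two series (network and application latency) line up on the same axis. A minimal sketch with hypothetical data:

```python
def bucket_average(samples, interval=60.0):
    """Average (timestamp_s, latency_ms) samples into fixed time bins,
    ready to plot network and application latency side by side."""
    bins = {}
    for ts, ms in samples:
        bins.setdefault(int(ts // interval), []).append(ms)
    return {b: sum(v) / len(v) for b, v in sorted(bins.items())}

# Hypothetical network-latency capture spanning two minutes:
network = [(10, 5), (30, 7), (70, 6), (90, 40), (110, 44)]
print(bucket_average(network))  # the second minute stands out
```

Feeding both the network-latency and application-latency series through the same bucketing makes divergence between the two immediately visible on a shared time axis.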

In the end, network engineers are a bit like defense lawyers: constantly having to make the case that the network is not the cause of slow ART. Of course, the network is sometimes at fault, but it can just as easily be the application design that is to blame.