Networks typically run a variety of applications, from single-tier, locally hosted applications like e-mail, to multi-tier web-based applications, to time-sensitive, multi-hop applications like VoIP. While application traffic has typically resided within the data center, SaaS and cloud computing are rapidly growing trends that drive application traffic outside the traditional enterprise network, making network latency even more of an issue. Pinpointing and correcting slowdowns is therefore a necessity, and it can be a real challenge. So what is responsible for the latency behind poor application performance: the network or the application itself?
First, we must distinguish between the two basic types of latency: network latency and application latency. Network latency is the amount of time it takes for an application to make a request and the server to respond with an acknowledgement (an ACK, the packet the Transmission Control Protocol uses to confirm receipt of data). Application latency is the amount of time the application needs to process the request and send a response containing real data.
Most network-monitoring products provide some sort of latency-monitoring feature, but typically for one type or the other, not both. Here are three tips to determine whether latency is being caused by the network or the applications (or both) in your environment.
1. Clearly determine network versus application latency
Every application issue is blamed on the network until proven otherwise – “guilty until proven innocent.” Clearly measuring network latency versus application latency is the proof the network engineer needs for acquittal, and packet-level monitoring is ideal for accumulating that evidence. By visually inspecting a packet-level conversation between a client and a poorly performing application, you can see whether the network (or a network device) is the source of the delay, or whether the application is the bottleneck. This is done by comparing the responsiveness of the TCP ACK to a client request against the application response, which contains actionable payload data. Quite often the network acknowledges the client request quickly (within milliseconds) while the application takes tenths of a second, or even whole seconds, to respond with payload data. When you see this, you know the application is causing the problem.
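The comparison above can be sketched in code. This is a minimal, hypothetical example: it assumes you have already extracted timestamped events from a packet capture (for instance with a capture tool) into simple (time, event) tuples; the event names and the `split_latency` helper are illustrative, not part of any real capture library.

```python
# Hypothetical sketch: splitting latency from packet-capture timestamps.
# Input is a list of (time_in_seconds, event_name) tuples; the event
# names here are assumptions for illustration.

def split_latency(events):
    """Return (network_latency, application_latency) in seconds.

    network latency:     client request -> server's TCP ACK
    application latency: client request -> first response packet
                         carrying actual payload data
    """
    t_request = t_ack = t_data = None
    for t, event in events:
        if event == "client_request" and t_request is None:
            t_request = t
        elif event == "server_ack" and t_ack is None:
            t_ack = t
        elif event == "server_data" and t_data is None:
            t_data = t
    return (t_ack - t_request, t_data - t_request)

# The pattern described in the text: the ACK arrives within
# milliseconds, but the payload takes most of a second.
capture = [
    (0.000, "client_request"),
    (0.002, "server_ack"),   # TCP-level acknowledgement: network is fast
    (0.800, "server_data"),  # first packet with payload: application is slow
]
net, app = split_latency(capture)
print(f"network: {net * 1000:.0f} ms, application: {app * 1000:.0f} ms")
```

A large gap between the two numbers, as here, is the acquittal: the network did its part in 2 ms; the remaining 798 ms belong to the application.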
2. Periodically test and monitor key applications and network connections
Periodic, active monitoring can also provide insight into network performance on key interfaces and can alert you when conditions begin to degrade. While this technique addresses only network latency (not application latency), it still provides important data when determining whether an issue is network or application related. For example, suppose the organization’s CRM application is delivered via a web-hosted service. Running periodic traffic in the background (even just pings) to the CRM application host provides an ongoing baseline of the network performance between users and the host. If the baseline increases, alarms can notify you that network latency is becoming an issue. User complaints about CRM performance without a marked change in the network-latency baseline almost always indicate that the application host, or the application itself, is at fault.
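The baseline-and-alarm idea can be sketched as follows. This is a simplified illustration, not a production monitor: the `LatencyBaseline` class, its window size, and the 2x alarm factor are all assumptions; a real deployment would feed it actual probe results (ping round-trip times, for example) on a timer.

```python
# Hypothetical baseline monitor: keeps a rolling window of periodic
# latency probes and flags readings that drift well above the baseline.
from collections import deque


class LatencyBaseline:
    def __init__(self, window=20, alarm_factor=2.0):
        self.samples = deque(maxlen=window)  # recent probe results (ms)
        self.alarm_factor = alarm_factor     # alarm at 2x the baseline

    def record(self, latency_ms):
        """Add one probe result; return True if it should raise an alarm."""
        alarm = (
            len(self.samples) >= 5  # wait for a minimal baseline first
            and latency_ms
            > self.alarm_factor * (sum(self.samples) / len(self.samples))
        )
        self.samples.append(latency_ms)
        return alarm


monitor = LatencyBaseline()
for ms in [30, 32, 31, 29, 30, 31]:
    monitor.record(ms)        # steady baseline of roughly 30 ms: no alarms
print(monitor.record(95))     # well above the baseline -> True
```

If CRM complaints arrive while this monitor stays quiet, the network baseline has not moved, which points the investigation at the application host rather than the path to it.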
3. Graph latency over time
Graphing latency over time helps identify patterns and anomalies that deserve closer attention. Latency monitoring can correlate spikes in latency with other relevant statistics, as well as with the actual network traffic occurring at that time. This type of high-resolution forensic analysis lets you spot latency problems at a glance and drill down quickly for closer inspection.
Ideally, network latency and application latency measurements can be graphed together over time, making clear whether the problem lies with the network or the application. Comparing the measurements of the two types of latency over time and seeing the differences can provide information that might have otherwise been overlooked.
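The side-by-side comparison can be reduced to a simple ratio even without a graphing tool. The sketch below is illustrative: the `drift` helper and the sample series are invented for the example, and it compares each series against its own early baseline so you can see which side actually shifted.

```python
# Hypothetical comparison of the two latency series over time: for each
# series, compare recent readings against the earlier baseline readings.
# A ratio near 1.0 means "unchanged"; well above 1.0 means "got slower".

def drift(series, baseline_n=5):
    """Ratio of the recent average to the baseline average."""
    baseline = sum(series[:baseline_n]) / baseline_n
    recent = sum(series[-baseline_n:]) / baseline_n
    return recent / baseline


# Illustrative hourly measurements in milliseconds:
network_ms = [5, 6, 5, 5, 6, 5, 6, 5, 5, 6]                   # flat
application_ms = [40, 42, 41, 40, 43, 90, 95, 110, 120, 130]  # climbing

print(f"network drift: {drift(network_ms):.1f}x")
print(f"application drift: {drift(application_ms):.1f}x")
# Here the application series drifted while the network stayed flat:
# the same conclusion a combined graph would make visible.
```

Plotted together, these two series would show a flat network line under a rising application line, which is exactly the picture that settles the network-versus-application question.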
Latency monitors can also include a thresholding feature, so alarms fire when latency exceeds normal conditions. The administrator can then learn of excessive latency before application performance becomes a widespread issue and make the necessary adjustments to the network proactively. This type of proactive latency monitoring allows the administrator to detect and correct problems in the network and applications before users even notice a slowdown.