Like drivers on a highway, every user is different when it comes to the speed at which they feel comfortable. Weather conditions, mood or simply time of day can cause a driver to adjust their speed; the same is true of applications and the time users are willing to wait for them to load. Some users don’t mind a long wait to download a report, but they will feel frustrated when a seemingly instantaneous application takes a few seconds longer to boot. Application Response Time (ART) is really about user experience, and what counts as “fast enough” varies widely with individual expectations.
However, when a new application is rolled out on the network, IT must be able to predict how it will perform for the business, and one of the key metrics for application performance is latency. Furthermore, existing mission-critical applications must not be negatively impacted by new ones. It’s important that network engineers be able to measure latency, from both the network and the application itself, and provide immediate feedback on how new applications behave on the network.
Here are three keys to determine whether latency is caused by the network or the applications in your environment.
1. Application vs. Network latency – you mean there’s a difference?
It seems as though every application issue is blamed on the network until proven otherwise. Clearly measuring network latency vs. application latency is the proof the network engineer needs, and packet-level monitoring is ideal for accumulating that evidence. By visually inspecting a packet-level conversation between a client and a poorly performing application, one can see whether the network (or a network device) is the source of the delay, or if the application is the bottleneck. This is done by comparing how quickly the server’s TCP ACK answers a client request versus how quickly the application response, carrying actual payload data, arrives.
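The ACK-versus-payload comparison can be sketched in a few lines. Assuming we already have three timestamps pulled from a capture (the timestamps and variable names below are purely illustrative, not from any specific tool):

```python
# Hypothetical timestamps (in seconds) from a capture of a single
# client request/response exchange.
t_client_request = 10.000   # client sends its request
t_server_ack     = 10.004   # server's bare TCP ACK arrives (no payload)
t_first_payload  = 10.850   # first response packet carrying payload arrives

def split_latency(t_request, t_ack, t_payload):
    """Split round-trip delay into network and application components.

    The TCP ACK is generated by the server's network stack, so
    request -> ACK approximates pure network delay. The gap between
    that ACK and the first payload byte is time the application spent
    producing its response.
    """
    network_latency = t_ack - t_request
    application_latency = t_payload - t_ack
    return network_latency, application_latency

net, app = split_latency(t_client_request, t_server_ack, t_first_payload)
print(f"network: {net * 1000:.1f} ms, application: {app * 1000:.1f} ms")
```

In this illustrative exchange the network accounts for only a few milliseconds while the application adds most of the delay, which is exactly the kind of evidence that moves the conversation off the network team's desk.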
2. Test and monitor key applications and network connections
Active monitoring can provide insight into network performance on key interfaces and can alert you when conditions begin to decline. Consider a web-hosted CRM application: if the network admin runs periodic checks against the application host in the background, those checks provide an ongoing baseline of network performance between users and the host. If latency rises above that baseline, alarms notify the admin that network latency is becoming an issue.
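A minimal sketch of that baseline-and-alarm logic might look like the following. The probe itself is a stand-in (a real check could be a TCP connect or HTTP request to the CRM host), and the window size and threshold are assumptions to tune for your environment:

```python
import statistics
from collections import deque

class LatencyBaseline:
    """Rolling baseline of recent latency probes; raises an alarm when
    the latest sample drifts well above the recent norm."""

    def __init__(self, window=100, threshold=2.0, warmup=10):
        self.samples = deque(maxlen=window)  # recent probe results (seconds)
        self.threshold = threshold           # alarm at this multiple of median
        self.warmup = warmup                 # samples needed before alarming

    def record(self, latency):
        """Record one probe result; return True if it should alarm."""
        alarm = (len(self.samples) >= self.warmup and
                 latency > self.threshold * statistics.median(self.samples))
        self.samples.append(latency)
        return alarm

# Illustrative use: twenty quiet probes, then a spike.
baseline = LatencyBaseline()
for _ in range(20):
    baseline.record(0.010)          # ~10 ms, normal
print(baseline.record(0.050))       # ~50 ms, well above baseline -> True
```

Using a median rather than a mean keeps one slow probe from dragging the baseline up, so the alarm reflects a genuine shift rather than a single outlier.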
3. Keep track of latency over time
Graphing latency over time helps to identify patterns and anomalies that deserve closer attention. Latency monitoring can help correlate periods of high latency with other relevant statistics, as well as the actual network traffic occurring at that time. This type of high-resolution forensic analysis makes it possible to spot latency problems at a glance and drill down quickly for a closer look. Ideally, network latency and application latency measurements can be graphed together over time; comparing the two and seeing where they diverge can reveal information that might otherwise be overlooked.
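The "compare the two series" idea can be sketched with a simple anomaly check. The per-minute measurements below are invented for illustration; in practice they would come from your monitoring tool's export, and the deviation threshold is an assumption:

```python
import statistics

# Hypothetical per-minute latency samples (ms) over the same window.
network_ms     = [4, 5, 4, 5, 4, 5, 4, 30, 32, 31]
application_ms = [120, 118, 122, 119, 121, 120, 118, 121, 119, 120]

def flag_anomalies(series, z=3.0):
    """Return indices where a sample deviates from the series median by
    more than z times the median absolute deviation (MAD)."""
    med = statistics.median(series)
    mad = statistics.median(abs(x - med) for x in series) or 1.0
    return [i for i, x in enumerate(series) if abs(x - med) > z * mad]

# Spikes in both series point at the network; spikes only in the
# application series point at the application.
print("network spikes at minutes:", flag_anomalies(network_ms))
print("application spikes at minutes:", flag_anomalies(application_ms))
```

Here the network series spikes in its final three samples while the application series stays flat, so a graph of the two together would immediately steer the investigation toward the network path rather than the application host.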
In the end, network engineers are a bit like defence lawyers: constantly having to make a case that the network is not the one causing slow ART. Of course, the network is sometimes at fault, but it can also easily be the application designer who is to blame.