Best Practices for Network Management in the Era of Distributed Applications

A network error just occurred in your environment, and you (the network engineer) have about two hours to fix it before your entire company is breathing down your neck. About 10 years ago, when centralized computing was all the rage, fixing this kind of problem was simple: you walked down the hallway and connected a network analyzer to the appropriate port to troubleshoot the issue. That is no longer the reality. Whether your organization has 50 or 50,000 employees, your network environment is most likely highly distributed. Imagine sending your network admin to China to fix a problem in that data center. It’s simply unrealistic.

Today’s distributed application architecture takes many forms, from locally hosted applications to web-based applications to multi-tier, third-party-hosted applications. While application traffic has historically resided in the data center, as in the scenario above, SaaS and cloud computing are driving application traffic outside the traditional enterprise network, making it far more challenging to pinpoint network and application performance issues.

No two networks are the same, with topologies depending on many factors, but most networks can be characterized using similar metrics. These metrics can help you plan a holistic solution that will best monitor and analyze your entire environment. Below are some key tips for getting started, or for reevaluating how you monitor and analyze distributed applications in your environment.

Understand the Application “In-Action”
What we mean by an application “in-action” is application response time: the time it takes an application to respond to a specific user request. This is a real-world measurement – it’s not just the time it takes the network to respond, but the time it takes for the user to receive actionable data in response to their request so they can move on to other tasks. It’s important to understand how long this process takes, because this is how the user measures the application’s responsiveness. The overall process can of course be broken down to a much more granular level using packet-based network analysis, and from there you can begin to determine whether it is the network itself or the application that is causing any delays. If you are interested in more detail on this, check out our blog “Three tips for determining whether latency is caused by the network or application.”
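As a rough illustration, here is a minimal Python sketch that times a request from the user’s point of view. The URL is a hypothetical placeholder, and a real deployment would rely on packet-based measurement rather than instrumented requests, but the idea is the same: the clock stops only when the user has their actionable data.

```python
# Minimal sketch: timing an application "in-action" from the user's side.
# The URL below is a hypothetical placeholder.
import time
import requests

def measure_response_time(url):
    """Return seconds elapsed from sending the request until the full
    response body (the user's actionable data) has arrived."""
    start = time.perf_counter()
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    _ = response.content          # force the full body to be read
    return time.perf_counter() - start

elapsed = measure_response_time("https://app.example.com/orders/recent")
print(f"Application response time: {elapsed:.3f} s")
```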

Where Should You Monitor?
The most useful data comes from monitoring the network at both the client and the server. Measuring at the server provides an accurate Server Response Time (SRT), the time it takes the application itself to respond to a request. Measuring at the client captures both the Server Response Time and the Network Response Time (NRT), but if you’ve also measured at the server, the SRT can simply be subtracted to get the true NRT, as sketched below. If you decide to monitor only on the client side, you’ll get a good picture of what the client is experiencing, but the latency measurements will blend the SRT and the NRT, obscuring critical information about where the problem lies. Client-side monitoring also requires equipment at every client site, including remote sites, which can drive up the cost.
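A minimal sketch of that subtraction, using hypothetical measurements rather than output from any specific tool:

```python
# Client-side latency includes both SRT and NRT; subtracting the
# server-measured SRT leaves the true network response time.
# The sample values below are hypothetical.

def network_response_time(client_latency_ms, server_response_ms):
    return client_latency_ms - server_response_ms

client_latency_ms = 480.0   # measured at the client: request sent -> response received
server_response_ms = 310.0  # measured at the server: request received -> response sent
print(f"NRT: {network_response_time(client_latency_ms, server_response_ms):.1f} ms")
```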

Most enterprises employ server-side monitoring, which is usually less expensive and better able to assess server response time. However, it hides the NRT, especially for remote users, making it more difficult to differentiate the experience of individual client locations.

Monitor 24×7 and Always Store Your Data!
Ongoing monitoring can provide insight into network performance on key interfaces and can alert you when conditions begin to decline. However, before you can tell whether network and transaction latency are snowballing, you must understand what “normal” means for your network.

Benchmarking your application response time gives you a detailed picture of how your application normally performs, so you can catch it right away when it is not performing up to par. When establishing an application benchmark, pay close attention to both NRT and SRT, and assess whether they hold consistently across a wide range of users (especially in different locations) and across a wide range of applications.
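One simple way to act on a benchmark is to flag measurements that fall well outside normal variation. The sketch below assumes a small set of historical SRT samples and a three-standard-deviation threshold; both the samples and the threshold choice are hypothetical.

```python
# Minimal sketch: comparing a new measurement against an established baseline.
import statistics

baseline_srt_ms = [112, 118, 109, 121, 115, 117, 110, 119]  # hypothetical historical SRT samples

mean = statistics.mean(baseline_srt_ms)
stdev = statistics.pstdev(baseline_srt_ms)
threshold = mean + 3 * stdev   # flag anything well outside normal variation

current_srt_ms = 162
if current_srt_ms > threshold:
    print(f"SRT {current_srt_ms} ms exceeds baseline threshold {threshold:.1f} ms")
```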

As network speeds grow, it is easy to miss things on the network. Whether you need to respond to user reports or simply want to explore something that just flew by in a bit more detail, it is important to store your data for historical analysis. Again, packet-based data is the best approach: flow-based data, which is often used for network monitoring, does not contain sufficient detail to pinpoint exactly where issues lie and prevent them from recurring.
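Dedicated capture appliances are the usual way to retain packet data at scale, but as a minimal sketch of the idea, the snippet below captures a batch of packets and writes them to a pcap file using scapy. It assumes scapy is installed and the process has capture privileges; the interface name, BPF filter, and file name are hypothetical examples.

```python
# Minimal sketch: retaining packet-level data for later historical analysis.
from scapy.all import sniff, wrpcap

# Capture 1000 packets of HTTPS traffic on a hypothetical interface.
packets = sniff(iface="eth0", filter="tcp port 443", count=1000)

# Store the capture so it can be analyzed long after the event.
wrpcap("app_traffic.pcap", packets)
```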

The Cloud Factor
From a network perspective, whether your application server sits in your data center or in the cloud, the main thing that changes is how you manage application problems when they occur. You’ll be shifting from managing your own infrastructure to managing service availability and performance.

Keeping your applications performing well is essential to your business, and your network monitoring and management strategy must adapt accordingly in this new distributed environment.
