Category Archives: Network Performance Management

Faster Networks Are Coming: Will Your Data Center Be Ready?

The growth of video, mobility and cloud services is pushing an increasing amount of traffic over enterprise networks, and many of the 10G networks that were performing well a few years ago are reeling under the strain. Now, 40G and even 100G networks seem ready to take over; in fact, according to a recent report from Infonetics Research, 40G networks are poised to take the crown from 10G as the new high-growth segment. The report states that buyers are shifting their purchases toward network equipment with advanced capabilities to deal with the increasing demands applications are placing on network infrastructure.

According to Infonetics analyst Matthias Machowinski, “40G alone will easily pass $1 billion in revenue this year as it becomes the technology of choice in the data center.” Furthermore, Infonetics forecasts the 100G market will gain critical mass as soon as 2015.

This need for speed presents an inherent problem for enterprise network administrators: Troubleshooting problems at higher data rates can be difficult, and oftentimes performance problems and outages can take longer than usual to resolve. Consequently, traffic—as well as business operations that depend on that traffic—can be impacted.

Enterprises need network monitoring solutions that give them clear visibility into their traffic, regardless of data rate. Before migrating to 40G and 100G networks, network administrators need a comprehensive plan for network performance monitoring that includes bandwidth monitoring and traffic analysis. With a plan in place that provides an increasing amount of network visibility, enterprises can:

  • Analyze traffic performance and traffic patterns
  • Verify and troubleshoot transactions
  • Spot and resolve issues with VoIP or video traffic
  • Recognize and troubleshoot security breaches

The need for 40G in the data center is coming on strong, and 100G is right on its tail. For more information on how we help enterprises tackle monitoring on these high data rate networks, visit our High Speed Networks page.

The Key to Rapidly Troubleshooting Network Performance Issues

Today’s networks are becoming faster and faster to accommodate the increasing demands of service and application growth, making network and application performance monitoring and troubleshooting essential, yet very challenging. Not only are organizations struggling to keep pace, but they are finding that visibility into the traffic traversing the networks is steadily decreasing.

To address this lack of visibility, organizations must implement network monitoring and analysis solutions with detailed troubleshooting that are compatible with high-speed networks. Oftentimes, the statistical data used to compile the monitoring dashboards and reports common in today’s flow-based monitoring solutions is insufficient for detailed root cause analysis, driving network engineers to use multiple products from multiple vendors to perform different levels of analysis. This significantly increases the cost of doing business for IT departments at a time when budgets are already razor thin.
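To see why flow statistics alone fall short, consider a hypothetical sketch in Python: a flow exporter collapses a stream of packets into per-flow counts and byte totals, which is exactly the data that populates dashboards, but the per-packet timing needed for root cause analysis is gone. The packet tuples and the 2.5-second stall below are invented for illustration.

```python
from collections import defaultdict

# Hypothetical packet records: (timestamp_sec, src, dst, size_bytes).
# Note the 2.5-second gap before the last packet -- an application stall.
packets = [
    (0.00, "10.0.0.1", "10.0.0.2", 1500),
    (0.01, "10.0.0.1", "10.0.0.2", 1500),
    (2.50, "10.0.0.1", "10.0.0.2", 1500),
]

def to_flow_records(packets):
    """Aggregate packets into flow summaries, as a flow exporter would."""
    flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for ts, src, dst, size in packets:
        flows[(src, dst)]["packets"] += 1
        flows[(src, dst)]["bytes"] += size
    return dict(flows)

def max_interpacket_gap(packets):
    """Packet-level analysis: largest gap between consecutive packets."""
    times = sorted(ts for ts, *_ in packets)
    return max((b - a for a, b in zip(times, times[1:])), default=0.0)

# The flow record reports 3 packets / 4500 bytes and looks perfectly
# healthy; only the retained packet data exposes the 2.49 s stall.
print(to_flow_records(packets))
print(max_interpacket_gap(packets))
```

The flow summary and the packet-level view describe the same traffic, but only the latter can answer "why was this transaction slow?", which is the gap the multi-vendor tool sprawl described above tries to paper over.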

However, organizations can meet this challenge by implementing tools that scale to 10G+ networks and are built on more powerful analytical platforms capable of handling the massive increases in transactions and data traversing the network. In addition, these tools must provide real-time feedback on overall network performance while recording traffic, so the data is always available for detailed, packet-based analysis.

WildPackets’ Omnipliance family of network analysis and recording devices includes each of these features, and can provide the necessary visibility on all network segments at 10G, 40G and even 100G. Join us on Wednesday, April 16, 2014 at 8:30am PT for a webinar that will discuss how to increase visibility into higher-speed networks. Register here.

Don’t Let Legacy Tools Get You Down

Most business owners and tech experts will tell you that we are in the midst of an exciting business technology revolution, one that is giving enterprises access to tools that are pushing the industry forward in ways previously unimaginable. Between cloud computing, increasingly fast networks, and unified communications platforms that converge multiple channels into a single system, businesses are discovering both enhanced capabilities and new technical complexities.

One of the major problems that IT managers and network engineers across various industries are encountering is the adoption of high-speed networks, which brings a sharp increase in bandwidth demand and leaves legacy monitoring and analysis equipment unable to handle the increased traffic speeds. In the sixth annual “State of the Network Global Study,” conducted by Network Instruments, half of survey respondents said that they expected bandwidth demand to increase by 50 percent over the next two years. While some businesses are still looking to move from a 1G to a 10G network, others have already moved up to 40G or are planning to in the next 12 months.

Unfortunately, in many cases, network engineers only have access to legacy tools designed for 1G rather than 10G or 40G, which can cause major problems. In a WildPackets survey, 92 percent of IT managers and network engineers said that their companies run networks of at least 10G, but a little less than half of those companies were still using legacy tools for analysis.

If your company isn’t giving your engineers the tools they need to monitor attacks or network slowdowns, increasing your bandwidth won’t necessarily solve any of your problems. Upgrading your speed alone isn’t enough; your IT team has to be able to keep it running fast. So if you are one of the many businesses that has moved, or is moving, to 10G or 40G, make the investments necessary to get the most out of your network. Don’t let legacy tools get you, or let you, down.