
How to prevent the next Heartbleed incident

What would you do if you found out your network had been compromised by a vulnerability that let a hacker access your users’ most sensitive information – passwords, stored files, bank details, even Social Security numbers – and, worse yet, the private keys behind the security certificates you rely on to encrypt all of this data as it travels across the Internet? And what if these hackers were able to do so entirely unnoticed?

That is exactly what the Heartbleed bug makes possible. Heartbleed – a flaw in OpenSSL’s implementation of the TLS heartbeat extension that lets an attacker read chunks of a server’s memory – was discovered independently by Finnish security researchers at Codenomicon and by a researcher on Google’s security team. Websites and companies large and small are now racing to patch the vulnerability, but its impact on the general public is still being assessed, and the full extent of the damage won’t be known for some time.
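
At its core, Heartbleed is a missing bounds check: a client sends a heartbeat request containing a payload and a claimed payload length, and a vulnerable server echoes back the claimed number of bytes without verifying that the request actually carried that many. The sketch below is a hypothetical Python model of that logic error – OpenSSL’s real code is C, and the buffer contents here are invented:

    # A deliberately simplified, hypothetical model of the Heartbleed
    # over-read. It demonstrates only the logic error: trusting the
    # attacker-supplied payload length.

    # Simulated server memory: the heartbeat payload happens to sit
    # next to secrets (invented contents for illustration).
    memory = b"PING" + b"user=alice;password=hunter2;PRIVATE_KEY..."

    def vulnerable_heartbeat(claimed_length: int) -> bytes:
        # BUG: echo back claimed_length bytes without checking that the
        # request actually contained that many bytes of payload.
        return memory[:claimed_length]

    def patched_heartbeat(claimed_length: int, payload: bytes) -> bytes:
        # FIX: silently discard requests whose claimed length exceeds
        # the payload that was really sent.
        if claimed_length > len(payload):
            return b""
        return payload[:claimed_length]

    # The attacker sends a 4-byte payload but claims it is 64 bytes long.
    print(vulnerable_heartbeat(64))        # echoes adjacent secrets
    print(patched_heartbeat(64, b"PING"))  # echoes nothing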

The chaos surrounding the vulnerability’s discovery is further proof that, despite companies’ best efforts, computer networks will remain under attack because the potential financial gain for hackers is enormous. Organizations are also starting to recognize that security and privacy are no longer the IT department’s concern alone – they now affect everyone.

While the best technique to combat evolving security threats is vigilance, there are also tools available to help you gain additional visibility into your network. Network forensics works as a contingency plan in case a security breach does occur. It can help you clean up your network to make sure that there are no lingering infections or other suspicious traffic, and it can also help to determine where the hacker breached your network, allowing you to fix any security holes.

Most enterprises have solutions in place to store and subsequently mine log data over relatively long periods of time, but those logs usually capture only high-level events: they can tell you that something happened, not how. In the case of the Heartbleed bug, there may be no log entries at all, since the vulnerability can be exploited without triggering a single alarm. A network forensics solution, by contrast, provides a recording of many days or even weeks of network activity, making it far easier to determine the fingerprint of the attack, the depth of the penetration, and which data was compromised.
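
As a concrete, simplified illustration, recorded traffic can be mined for the fingerprint of a Heartbleed probe – a heartbeat request that claims more payload than its TLS record actually carries. The following Python sketch uses the open-source scapy library against a hypothetical capture file; a real forensics appliance would also handle TCP reassembly and records split across segments, which this sketch ignores:

    # Scan a recorded pcap for TLS heartbeat requests whose claimed
    # payload length exceeds what the record really carries.
    import struct
    from scapy.all import rdpcap, Raw, TCP

    HEARTBEAT = 0x18  # TLS record content type for the heartbeat extension

    for pkt in rdpcap("recorded_traffic.pcap"):
        if not (pkt.haslayer(TCP) and pkt.haslayer(Raw)):
            continue
        data = bytes(pkt[Raw].load)
        # TLS record header: type (1) + version (2) + record length (2),
        # then the heartbeat message: type (1) + claimed payload length (2).
        if len(data) < 8 or data[0] != HEARTBEAT:
            continue
        record_len, = struct.unpack("!H", data[3:5])
        claimed_len, = struct.unpack("!H", data[6:8])
        # A request (type 1) claiming more payload than the record carries
        # is the signature of an attempted over-read (simplified heuristic).
        if data[5] == 1 and claimed_len > record_len - 3:
            print(f"Possible Heartbleed probe: {pkt.summary()} claims "
                  f"{claimed_len} bytes in a {record_len}-byte record")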

Unfortunately, we now live in a world where events like the Heartbleed incident are increasingly common. Organizations both big and small must stay aware of the trends affecting the security industry and implement solutions such as network forensics to ensure that security threats don’t compromise their users.

To read about more real-world examples of how network forensics can aid your organization in determining the effects of security threats, read our white paper, “Real-World Security Investigations with Network Forensics.”

The Key to Rapidly Troubleshooting Network Performance Issues

Today’s networks are becoming faster and faster to accommodate the increasing demands of service and application growth, making network and application performance monitoring and troubleshooting essential, yet very challenging. Not only are organizations struggling to keep pace, but they are finding that visibility into the traffic traversing the networks is steadily decreasing.

To address this lack of visibility, organizations must implement network monitoring and analysis solutions that offer detailed troubleshooting and are built for high-speed networks. The statistical data that drives the dashboards and reports of today’s flow-based monitoring solutions is often insufficient for detailed root-cause analysis, driving network engineers to use multiple products from multiple vendors for different levels of analysis. This significantly increases the cost of doing business for IT departments at a time when budgets are already razor thin.
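
To see the gap concretely, consider what each data source can say about one slow transaction. The Python sketch below uses invented numbers purely for illustration: the flow record shows that a transfer was slow, while the packet timestamps show why:

    # Hypothetical numbers illustrating flow-level vs. packet-level data.
    # A NetFlow-style record collapses a whole conversation into counters:
    flow = {"src": "10.0.0.5", "dst": "10.0.1.9:443",
            "packets": 1342, "bytes": 1882411, "duration_s": 38.2}
    print(f"Flow view: {flow['bytes']} bytes in {flow['duration_s']} s "
          f"-- slow, but why?")

    # Packet timestamps (seconds) still hold the answer. Here the TCP
    # handshake is fast but the first data byte arrives late:
    syn, syn_ack, request, first_data = 0.000, 0.045, 0.046, 2.310
    network_rtt = syn_ack - syn          # 45 ms: the network is fine
    server_delay = first_data - request  # ~2.3 s: the server is the culprit
    print(f"Packet view: RTT {network_rtt * 1000:.0f} ms, "
          f"server response time {server_delay:.2f} s")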

Organizations can meet this challenge by implementing tools that scale to 10G and beyond, built on analytical platforms powerful enough to handle the massive increase in transactions and data traversing the network. These tools must also provide real-time feedback on overall network performance while keeping the underlying data available for detailed, packet-based analysis.

WildPackets’ Omnipliance family of network analysis and recording devices delivers all of these capabilities and can provide the necessary visibility on network segments at 10G, 40G, and even 100G. Join us on Wednesday, April 16, 2014 at 8:30am PT for a webinar on increasing visibility into higher-speed networks. Register here.

Don’t Let Legacy Tools Get You Down

Most business owners and technology experts will tell you that we are in the midst of an exciting business technology revolution, one that is giving enterprises access to tools that push the industry forward in ways previously unimaginable. Between cloud computing, increasingly fast networks, and unified communications platforms that converge multiple channels into a single system, businesses are discovering both enhanced capabilities and new technical complexities.

One of the major problems IT managers and network engineers across industries are encountering is the adoption of high-speed networks, which brings with it a sharp increase in bandwidth demand that legacy monitoring and analysis equipment cannot handle. In the sixth annual “State of the Network Global Study,” conducted by Network Instruments, half of survey respondents said they expected bandwidth demand to increase by 50 percent over the next two years. While some businesses are still looking to move from a 1G to a 10G network, others have already moved, or plan to move, to 40G within the next 12 months.

Unfortunately, in many cases, network engineers only have access to legacy tools designed for 1G rather than 10G or 40G networks, and that mismatch causes major problems. In a WildPackets survey, 92 percent of IT managers and network engineers said their companies run networks of at least 10G, yet a little less than half of those companies were still using legacy tools for analysis.
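
The gap is easy to quantify. The Python sketch below is nothing more than back-of-the-envelope line-rate arithmetic – not the specifications of any particular product – but it shows how each jump in link speed multiplies both the packet rate an analysis tool must keep up with and the disk throughput needed to record everything:

    # Back-of-the-envelope line-rate arithmetic: what a capture-and-
    # analysis tool must sustain at each link speed (raw link math,
    # not measurements of any particular product).

    ETHERNET_OVERHEAD = 20  # preamble + inter-frame gap, bytes per frame

    def line_rate(gbps: int, frame_bytes: int = 64) -> None:
        bits_per_sec = gbps * 10**9
        # Worst case: minimum-size (64-byte) frames maximize packet rate.
        frames_per_sec = bits_per_sec / ((frame_bytes + ETHERNET_OVERHEAD) * 8)
        mb_per_sec = bits_per_sec / 8 / 10**6
        print(f"{gbps:>3}G: {frames_per_sec / 1e6:6.2f} Mpps, "
              f"{mb_per_sec:6,.0f} MB/s to disk, "
              f"{mb_per_sec * 3600 / 1e6:5.2f} TB per hour recorded")

    for speed in (1, 10, 40):
        line_rate(speed)
    #   1G:   1.49 Mpps,    125 MB/s to disk,  0.45 TB per hour recorded
    #  10G:  14.88 Mpps,  1,250 MB/s to disk,  4.50 TB per hour recorded
    #  40G:  59.52 Mpps,  5,000 MB/s to disk, 18.00 TB per hour recorded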

If your company isn’t giving your engineers the tools they need to monitor attacks or diagnose network slowdowns, increasing your bandwidth won’t necessarily solve any of your problems. Upgrading your speed alone isn’t enough – your IT team has to be able to keep it running fast. So if you are one of the many businesses that has moved – or is moving – to 10G or 40G, make the investments necessary to get the most out of your network. Don’t let legacy tools get you – or let you – down.