
How to clean up an attack like Wells Fargo

In late March, news broke that Wells Fargo’s consumer-facing website had gone offline due to a distributed denial of service (DDoS) attack. News outlets reported that the attack was conducted by the hacktivist group Izz ad-Din al-Qassam Cyber Fighters, which said it was retaliating for an anti-Islamic YouTube video. The group also claims that it will launch similar attacks against other banks, such as Chase, Citibank, and SunTrust, if the video is not taken down. James Dohnert of V3.co.uk writes about the attack in more detail here.

Organizations both big and small have suffered from DDoS attacks in the last few months. Although DDoS attacks are pretty “old school,” they remain highly effective at bringing down websites and web-based applications. The methods behind these attacks have become more sophisticated, with much greater horsepower behind them and much more obfuscation of their sources. But network monitoring and analysis are also rapidly improving, offering new strategies both to protect your network and to clean it up if an incident occurs. Below we detail how you can be proactive in protection and reactive if you fall victim.

How to Protect Yourself
DDoS attacks are designed to block network access for legitimate users. These attacks create extremely large volumes of useless traffic, saturating network resources and thereby blocking access for users and customers. The attack’s predecessor, the denial of service (DoS) attack, tied up server resources by repeatedly signaling the start of a conversation with no intention to converse. To mitigate those attacks, you could use ACLs (access control lists) or firewall rules to keep the attack traffic from reaching the server. But with DDoS attacks, the first “D” (the distributed nature of the attack) makes blocking offending traffic extremely difficult, and it broadens the scale from a few machines to a worldwide attack launched from machines that have been infected by bots.

With today’s DDoS attacks it really comes down to network protection. It is therefore very important to:

  • Use network analysis tools to capture all the data in one place. Although attacks come from a large number of IP addresses, the traffic is fairly homogeneous at the IP layer. If you can find a common behavior at the packet level, you can filter out this traffic.
  • Set up alerts to isolate questionable behavior. If requests start demanding more data than normal, or the number of users accessing your website suddenly spikes, it might be the beginning of a DDoS attack.
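As a minimal sketch of the second bullet, a spike alert can be as simple as comparing a sliding-window request rate against a baseline frozen during a known-good period. All class names, window lengths, and thresholds here are illustrative assumptions, not taken from any particular product:

```python
from collections import deque

class SpikeDetector:
    """Alert when the request rate in a sliding window exceeds a
    multiple of a baseline learned during a known-good period.
    Window length and threshold are illustrative defaults."""

    def __init__(self, window_s=10.0, threshold=5.0):
        self.window_s = window_s
        self.threshold = threshold
        self.events = deque()   # timestamps of recent requests
        self.baseline = None    # requests/sec from normal traffic

    def rate(self, now):
        """Requests per second over the trailing window."""
        while self.events and now - self.events[0] > self.window_s:
            self.events.popleft()
        return len(self.events) / self.window_s

    def record(self, ts):
        """Record one request; return True if it looks like a spike."""
        self.events.append(ts)
        current = self.rate(ts)
        if self.baseline is None:
            return False
        return current > self.threshold * self.baseline

    def set_baseline(self, ts):
        """Freeze the baseline at the end of a known-good period."""
        self.baseline = max(self.rate(ts), 0.1)
```

Freezing the baseline (rather than letting it adapt continuously) matters here: an attacker who ramps traffic up slowly can otherwise drag an adaptive baseline along with the attack and never trip a ratio threshold.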

How to Clean Up the Mess
Having a network recorder with network forensics in place is key to helping you clean up your system. Network forensics is the process of capturing and storing packet-level network data 24×7 for analysis if a problem occurs. This process gives you a complete picture of the problem and lets you gain crucial information, including exactly where, and how, the attack was orchestrated. Armed with this knowledge, you can build new rules for intrusion detection and prevention systems (IDS/IPS), or new alarms for your network monitoring and analysis solution, so you’ll be notified at the first sign of a renewed attack. If you are interested in learning more about network forensics, check out this Rich Report podcast featuring Jay Botelho, Director of Product Management at WildPackets, here.
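To make the “build new rules from captured data” step concrete, here is a small sketch (the packet representation and field names are assumptions, not tied to any specific recorder or IDS) that tallies packet-level signatures from a forensic capture, so the dominant attack pattern can be turned into a filter or IDS rule:

```python
from collections import Counter

def dominant_signature(packets):
    """Tally (dst_port, payload length, TCP flags) triples from
    captured packet metadata and return the most common one along
    with its share of total traffic. A signature that dominates
    the capture is a candidate for a new IDS/IPS rule."""
    sigs = Counter((p["dst_port"], p["length"], p["flags"]) for p in packets)
    (dst_port, length, flags), count = sigs.most_common(1)[0]
    return {
        "dst_port": dst_port,
        "length": length,
        "flags": flags,
        "share": count / len(packets),
    }
```

If one signature accounts for, say, 90% of packets during the incident window, that triple is the behavior to filter on, even though the source IPs are widely distributed.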

DDoS attacks are on the rise and even large banks like Wells Fargo have trouble protecting themselves and reacting to these attacks. To help mitigate some of the headaches, have a game plan in place both for proactively stopping these attacks and cleaning up after these attacks if you are targeted.

10G Upgrades: Fact or Fiction?

When it comes to network bandwidth and data speeds, for the most part we are still living in a 1G to 10G-transition world. That’s not to say that no one has invested in 10G or even 40G, but the pool of enterprises that are moving in this direction is still quite small.

Why?

It is not a technology issue – there are plenty of products that are out there and ready to handle 10G. The issue is cost. Businesses just don’t need 10x the bandwidth and are not willing to pay 3x the cost. Instead they are opting for multiple 1G channels. The Dell’Oro Group recently published a report on the underwhelming statistics for companies moving to 10G.

There is, however, one common use case that drives enterprises to 10G. Below we illustrate this use case, along with some best practices that have emerged for monitoring and analyzing data at 10G.

Common Use Case for Moving to 10G
One of our customers, a large manufacturing company, made the transition because of significant growth in their backup traffic. They were saturating 2x and 4x Gig channels when their backup would kick in at night so they started looking to virtualized architectures for help. After that transition, they needed 10G to support their virtualization server clusters, which had grown tremendously. Although this particular example is with a manufacturing company, this is one of the most common reasons we hear across many industries for initially making a switch to 10G – too much backup traffic, and not enough time.

What to Think About Before You Switch to 10G
Migration to 10G usually happens first at core switches and in the network backbone – and the same is also true if you’re already at 10G and planning a move to 40G. As we mentioned above, cost is a huge factor in folks holding back, and it is not only the equipment costs. Cabling, rewiring, and increased power consumption can all be issues, depending on the magnitude of the migration, so don’t forget to factor in ALL the costs.

Network Monitoring and Analysis at 10G
As part of your migration plan, remember to upgrade your network analysis and troubleshooting solution(s) as well. Network analysis at 10G is a completely different beast, and a beast it is! The days of point-and-shoot, real-time analysis go by the wayside at 10G. Network recorders are a must. Installed at strategic locations, like WAN links, data centers, etc., network recorders keep an ongoing record of all your network traffic. If you start getting alerts, or alarms go off, you simply need to rewind your network traffic to see exactly what the problem was. If a particular user is having problems or you need to retrieve packet-level data (network OR application) for a compliance investigation, again, the data is there and instantly searchable and retrievable.
With 10G, 24×7 recording and monitoring are requirements. Simply connecting an analyzer after a problem is reported is not viable: there’s too much data, and trying to reproduce problems is a nightmare. You need to capture the traffic, packet by packet, so you can immediately recreate the conditions to analyze and solve the problem.

With 10G you also have to streamline and condense what you want to analyze. Attempting to monitor and analyze every type of data that streams through your network can be tedious and there is a key limiting factor – storage capacity for all that data. If you do not need to analyze VoIP data, for example, then take it out of the mix. It will reduce the overall storage required and typically increase overall speed of network traffic that can be processed and stored.
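As a sketch of trimming traffic before it hits storage, here is a simple keep/drop predicate for the VoIP example. The port numbers are common defaults (SIP signaling on 5060/5061, a typical RTP media range), not universal values, so treat them as deployment-specific assumptions:

```python
# Port numbers are common defaults, not universal; check your deployment.
SIP_PORTS = {5060, 5061}           # SIP signaling (UDP or TCP)
RTP_PORTS = range(16384, 32768)    # a typical default RTP media range

def keep_packet(proto, src_port, dst_port):
    """Return False for VoIP packets we have chosen not to record,
    reducing storage and processing load on the recorder."""
    if proto not in ("udp", "tcp"):
        return True
    for port in (src_port, dst_port):
        if port in SIP_PORTS or (proto == "udp" and port in RTP_PORTS):
            return False
    return True
```

In practice the same effect is usually achieved with a capture filter on the recorder itself, but the logic is the same: decide what not to store before it consumes disk.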

2013 might not be the year for you to transition over to 10G, but it is the year to start planning and thinking about what that transition will look like, especially if you are looking to move to a more virtualized data center. If you are interested in learning more about best practices for moving to high-speed networks and the new role that overlay networks play in this transition, check out our webcast “Packet capture in high-speed and data center networks.”

How to Store Your Network Data with TimeLine

Network performance across one of your 10G backbones just took a nosedive. What you’d really like to know is exactly what was going on before, and exactly when, the performance changed, but those network packets are long gone. Or are they?

Some network engineers, and certainly those who have been bitten by such a problem before, are employing network recorders, like WildPackets TimeLine, to constantly record network data at the packet level. With TimeLine you have a complete recording of the traffic on your network, even highly utilized 10G links, so in-depth analysis of situations that happened a few minutes, a few hours, or maybe even a few days ago is only a click away.

Capture Data without Losing It
With more and more traffic running over 10G links, performing real-time analysis is becoming very difficult, especially when you’re reacting to a nose-dive in performance and even the execs are aware of the problem. Perhaps it’s an intermittent problem – are you going to stick around all night waiting for it to happen again so you can capture and analyze the data? Probably, but you don’t need to. TimeLine records and stores each and every packet traversing a network link, up to 12Gbps, with zero packet loss, creating a complete archive of exactly what is transpiring on the network. No need to wait for the problem to happen again; no need to try to reproduce the problem, and in the process risk reducing network performance even further.

TimeLine is specifically designed to store massive amounts of packet data efficiently and without data loss, and to quickly find the data you need when a problem arises. Simply specify the amount of storage space that you want to allocate to the capture, based on the average data throughput and the amount of time you want data to be preserved, and TimeLine does the rest. Once the allocated space is filled, data will simply roll over, first-in-first-out, so you’ll always have data for the amount of time determined when making the storage allocation. Start a monitoring capture at the same time and TimeLine will send alarms to you based on your configuration, so you can be instantly alerted when the problem happens again.

The Longevity of Storage
Some problems may require relatively long storage periods. With up to 48TB of disk space, TimeLine can store nearly 11 hours of data at 10Gbps steady state (and no one pushes their 10G link that hard) or over 2 days at 2Gbps (much more reasonable). So, when that intermittent problem crops up, don’t worry. Just head home, and if you get an alert you can log into TimeLine from home, scan through the real-time statistics that TimeLine generates, focus in on the time frame (or IP address(es), or protocols, etc.) of interest and perform a forensic search directly on the TimeLine box. No additional strain on the network, and if your first search wasn’t exactly right, the data is still there and available for you to search again. And if you’re confident that you have captured all the data you need, you can stop the data capture to preserve your data for as long as you need to complete the analysis.
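The back-of-the-envelope math behind those retention figures is straightforward. A quick sketch, assuming decimal units and ignoring index/metadata overhead:

```python
def retention_hours(disk_tb, rate_gbps):
    """Hours of packet data a first-in-first-out store of disk_tb
    terabytes holds at a sustained capture rate of rate_gbps
    (gigabits per second). Decimal units; overhead ignored."""
    disk_bytes = disk_tb * 1e12
    bytes_per_second = rate_gbps * 1e9 / 8   # bits -> bytes
    return disk_bytes / bytes_per_second / 3600

# 48 TB at a steady 10 Gbps works out to roughly 10.7 hours,
# and about 53 hours (a bit over 2 days) at 2 Gbps.
```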

Let TimeLine Work for You 24/7
Monitoring, capturing, storing, and analyzing network data is, and should be treated as, a full-time job, and TimeLine does that job for you. TimeLine continues to monitor and document your network data even during downtime. Even when only a handful of users are on your network, it can still be hit by a hack or a harmful outside threat, especially with the surge of BYOD. For this reason, keep TimeLine on and working so that data capture happens, and is stored for you, in real time, 24×7.

If you need to view a particular timeframe, simply highlight that area on the TimeLine utilization graph, and you will see the remaining stats change to reflect that timeframe. While analyzing this specific selection, TimeLine continues to monitor your network data in real time, always making your job easier.