The Challenges that Arise in Monitoring 802.11ac Equipment

In our last blog post, we wrote about the Wi-Fi Alliance’s 802.11ac certification program and how it helps both consumers and distributors of wireless equipment.

This week, we want to focus on the process of monitoring your new 802.11ac equipment once it’s installed. Before you can monitor 802.11ac traffic, you need to be able to capture the traffic. This is true regardless of the monitoring solution used, and capturing 11ac traffic remains one of the biggest challenges with this new technology.

Let’s take just a moment and step back to see how this has been done with 802.11a/b/g/n. Wireless LAN monitoring did not come without its struggles with these earlier standards either, but over time the industry settled on a comfortable solution: laptops with 802.11 USB WLAN adapters. This solution is highly portable, making on-site WLAN analysis easy regardless of location. Most solutions work with only a subset of available USB WLAN adapters, and some require a custom adapter. In either case, the key requirement is that the WLAN adapter can be put into “promiscuous mode,” sometimes called sniffing mode. If this capability is not exposed in the device, it cannot be used as part of a WLAN network monitoring and analysis solution. This combination remains the “go-to” solution for most WLAN analysts, both in the field and in corporate WLAN environments.

So, what’s changed in 802.11ac? Let’s take a look at a few key differences between 11ac and previous technologies that are making WLAN monitoring and analysis more difficult.

Timing/Availability
Let’s face it, 802.11ac is brand new, and with that comes all the issues of timing and availability that may not always coincide with a perfect and logical order. Hardware vendors need software to test the new devices. Software vendors need hardware to test their software before they can provide it to hardware manufacturers. It is a classic “chicken and egg” scenario. As we described earlier, WLAN analysis software needs supported WLAN USB adapters to collect data for analysis. Without supported adapters, the software cannot be developed and tested, at least not fully. This is starting to ease, as more adapters from more vendors are starting to hit the market, some of which are compatible with promiscuous mode and can be used for wireless packet capture. As with previous 802.11 specifications, the situation will improve, but it’s always a painful process at the beginning.

Breaking the “Gigabit Barrier”
802.11ac is the first wireless standard to break the “gigabit barrier,” delivering wireless connectivity at data rates in excess of 1Gbps (in some modes). If you recall, it wasn’t all that long ago (OK, at least for us old guys) that we were talking about 1Gbps wired speeds. Breaking through this barrier requires some adjustments in the way we capture and analyze wireless data. First, back to our USB WLAN adapters. The key word here is “USB.” The laptops most of us have probably support USB v2.0. That means a maximum theoretical throughput of 480Mbps, with a practical limit less than 300Mbps. The slowest typical 802.11ac connection will be at 433Mbps (1-stream and 80MHz bandwidth). So it’s pretty clear that USB v2.0 is not up to the task. USB 3.0 will do in most cases, but you need to make sure that both your laptop and your 802.11ac WLAN USB adapter (that is compatible with your analysis software) are USB v3.0 compliant. It’s looking like some hardware upgrades may be required soon…
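The 433Mbps figure above follows directly from the draft 802.11ac PHY parameters. As a sanity check, here is the arithmetic in a short Python sketch, using the standard values for a 1-stream, 80MHz, MCS 9 link:

```python
# 802.11ac PHY data rate = data subcarriers * bits/subcarrier * coding rate / symbol time.
# Standard draft-11ac values for a 1-stream, 80 MHz, MCS 9 link:
DATA_SUBCARRIERS_80MHZ = 234   # data tones in an 80 MHz channel
BITS_PER_SUBCARRIER = 8        # 256-QAM (MCS 9)
CODING_RATE = 5 / 6            # MCS 9 coding rate
SYMBOL_TIME_US = 3.6           # OFDM symbol duration with short guard interval

# bits per microsecond == Mbit/s
rate_mbps = DATA_SUBCARRIERS_80MHZ * BITS_PER_SUBCARRIER * CODING_RATE / SYMBOL_TIME_US
print(round(rate_mbps, 1))  # 433.3

# Even this slowest common 11ac rate exceeds USB 2.0's practical limit.
USB2_PRACTICAL_MBPS = 300
print(rate_mbps > USB2_PRACTICAL_MBPS)  # True
```

So even a single-stream 11ac client can saturate a USB 2.0 adapter; higher MCS rates, wider channels, and multiple streams only widen the gap.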

And it’s not just about the devices. Network analysis at greater than 1Gbps can be very demanding in terms of CPU and memory, and the software itself must be up to the task. Many are not; we already know this. Fortunately our OmniPeek network analyzer is up to the task since it’s been doing multi-gigabit wired analysis for many years now.

There are alternatives to USB WLAN adapters for capturing data, and we highly recommend that users begin thinking in this direction. The best approach is to use an AP to capture wireless data. APs are typically more capable than USB adapters (think more streams and better optional feature support) so they are best suited to the task. If you use the same brand for capture that you plan to use to send data, then the feature set compatibility will be guaranteed. Again, just as with USB adapters, you need to ensure that the AP can be put into promiscuous mode. Most enterprise APs can (but not all), and most consumer-grade APs cannot. Remote Pcap support is the best bet for using an AP as a packet capture device. We know, this makes portability a bit more of an issue. You may need to find a plug and be a bit more stationary when capturing data, so you can plug in your AP “sniffer.”
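When an AP exports its capture remotely, what ultimately lands on the analysis side is typically a libpcap-format stream: a 24-byte global header followed by per-packet records. As an illustration (not a substitute for a real capture library), here is a minimal parser for that classic file format, exercised on a tiny hand-built capture:

```python
import struct

def parse_pcap(data):
    """Parse a classic little-endian libpcap byte stream into
    (timestamp, frame_bytes) tuples."""
    magic, = struct.unpack_from("<I", data, 0)
    assert magic == 0xA1B2C3D4, "unsupported byte order / format"
    # Skip the 24-byte global header: magic, version, tz, sigfigs, snaplen, linktype.
    offset, packets = 24, []
    while offset < len(data):
        ts_sec, ts_usec, incl_len, _orig_len = struct.unpack_from("<IIII", data, offset)
        offset += 16  # per-record header is 16 bytes
        packets.append((ts_sec + ts_usec / 1e6, data[offset:offset + incl_len]))
        offset += incl_len
    return packets

# Build a tiny in-memory capture with one 4-byte "frame" to demonstrate.
# Linktype 105 = IEEE 802.11 frames, as an AP sniffer would produce.
header = struct.pack("<IHHiIII", 0xA1B2C3D4, 2, 4, 0, 0, 65535, 105)
record = struct.pack("<IIII", 1, 500000, 4, 4) + b"\xde\xad\xbe\xef"
pkts = parse_pcap(header + record)
print(pkts)  # [(1.5, b'\xde\xad\xbe\xef')]
```

In practice you would hand the remote feed to your analysis software directly; the sketch just shows that the data arriving from an AP “sniffer” is the same packet-record format you already know from local captures.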

Staggered Feature Roll-outs
The 802.11ac specification is still in draft form, and the biggest risk at this point is equipment that doesn’t yet support the feature set you expect, even if the feature is “required” by the spec. The Wi-Fi Alliance (WFA) has just started interoperability testing against the draft spec, which should help a great deal in eliminating the uncertainty. Until the specification is ratified, though, carefully research any equipment you plan to buy specifically for a WLAN analysis set-up to be sure it will meet both your immediate needs and those down the road after ratification.

802.11ac shows major promise in the industry, but getting yourself ready to be able to monitor and analyze the new equipment is a bit of a challenge. Patience is essential in this process, as is ensuring that the solutions and equipment you buy will be compatible with a long future of 802.11ac wireless analysis.

Where to Capture Packets in High-Speed and Data Center Networks

Network analysis changes dramatically as network speeds grow (10G, 40G, and up to 100G). From more packets to capture, to changing traffic patterns like East-West traffic among servers, network analysis strategies must adapt as new technologies are introduced. We’ve written in the past about best practices for network monitoring on high-speed networks. However, we have never gone into detail on how to position capture points to see inside areas that might be neglected by previous monitoring methods.

Below we go into where you should set up your capture points to get the most visibility on your high-speed network.

Capturing Data on the Network
Typically, if you are connecting directly to the network you are going to collect data through traditional SPAN ports, mirror ports, or taps. This is a well-known method, frequently used to obtain a passive feed from a network, so the process and any associated configuration should be familiar. One challenge does arise when virtualization is in use: you will miss intra-host traffic when capturing only on the physical network. Don’t worry, below we explain how you can capture this traffic as well.
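Whatever the feed, what a SPAN port, mirror port, or tap delivers is ordinary Ethernet frames. A minimal sketch of the first decode step any analyzer performs on that feed, run on a hand-built frame:

```python
import struct

def parse_ethernet(frame):
    """Split an Ethernet II frame (as delivered by a SPAN/mirror port or tap)
    into destination MAC, source MAC, EtherType, and payload."""
    dst, src = frame[0:6], frame[6:12]
    ethertype, = struct.unpack("!H", frame[12:14])  # network byte order
    fmt = lambda mac: ":".join(f"{b:02x}" for b in mac)
    return fmt(dst), fmt(src), hex(ethertype), frame[14:]

# A hand-built frame: broadcast destination, EtherType 0x0800 (IPv4).
frame = bytes.fromhex("ffffffffffff" "001122334455" "0800") + b"payload"
print(parse_ethernet(frame))
# ('ff:ff:ff:ff:ff:ff', '00:11:22:33:44:55', '0x800', b'payload')
```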

In this video, you will see where to capture traffic if you are in the data center, a corporate campus, or remote office.

Capturing Data on a vSwitch
Data on virtual servers poses a unique challenge, as oftentimes much of the data never leaves the physical server – for example, communication between an application and a database running in separate VMs on the same host. In this case, capturing data off the SPAN port of the virtual switch or hypervisor gives you visibility into intra-host traffic. To do so, you either need network analysis software running directly on the server, or a “virtual tap” (a piece of software) that performs the function of a traditional hardware tap, copying network traffic out to a physical port where it can then be analyzed in a traditional fashion. If you’re running the network analysis software directly on the local VM, remember to allocate enough memory, IO, and disk space to accommodate your network analysis needs.
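How much disk is “enough” for that local VM? A back-of-envelope sizing helper makes the point; the 30% average utilization below is just an assumed figure to plug your own numbers into:

```python
def capture_disk_gb_per_hour(link_mbps, utilization=0.3):
    """Rough disk budget for continuous full-packet capture on a monitored link.
    `utilization` is the assumed average load (0.3 = 30%); tune for your network."""
    bits_per_hour = link_mbps * 1e6 * utilization * 3600
    return bits_per_hour / 8 / 1e9  # bits -> bytes -> GB

# A 1 Gbps vSwitch uplink at 30% average load:
print(round(capture_disk_gb_per_hour(1000), 1))  # 135.0 GB per hour
```

Continuous capture fills disks quickly, which is why sizing (and a retention policy) should be decided before the probe goes in, not after.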

Capturing Packets in the Cloud
Cloud computing comes in many shapes and forms. If you are trying to capture data in a private cloud, the practice and procedure will be similar to that of capturing on your vSwitch. If you control the infrastructure, you can sniff anywhere. If you are a service provider, you need to carefully consider data access, data separation, and customer privacy issues.

If you are using a third-party cloud service, the ability to capture and monitor traffic is going to depend on the implementation. If you are running software-as-a-service (SaaS) from a provider, it will be hard to have sniffing rights, so your last point of knowledge about your traffic will be at the WAN link. This will still allow you to obtain valuable analytics, like round-trip latency, which will provide a good indication of the overall user experience. However, if users are experiencing latency and you think that it might be an application performance problem and not an overall network problem, then it will be difficult to analyze the situation. For example, a database connection issue or database contention may be very difficult to troubleshoot. But then again, isn’t that why you’re paying your SaaS provider?
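One example of the analytics still available at the WAN link: TCP round-trip latency, estimated as the gap between an outbound SYN and the returning SYN-ACK. A simplified sketch, where the string flags and connection IDs stand in for real TCP flag bits and 4-tuples:

```python
def round_trip_latency(packets):
    """Estimate TCP round-trip latency per connection from capture timestamps
    at the WAN link: time between the outbound SYN and the returning SYN-ACK.
    `packets` is a list of (timestamp_sec, flags, conn_id) tuples."""
    syn_times, rtts = {}, {}
    for ts, flags, conn in packets:
        if flags == "SYN":
            syn_times[conn] = ts
        elif flags == "SYN-ACK" and conn in syn_times:
            rtts[conn] = ts - syn_times.pop(conn)
    return rtts

# Two connections observed at the WAN edge (timestamps in seconds):
trace = [(0.000, "SYN", "A"), (0.042, "SYN-ACK", "A"),
         (0.100, "SYN", "B"), (0.131, "SYN-ACK", "B")]
rtts_ms = {c: round(v * 1000) for c, v in round_trip_latency(trace).items()}
print(rtts_ms)  # {'A': 42, 'B': 31}
```

Because this only needs packet headers and timestamps, it works even when the payload beyond the WAN link belongs to your SaaS provider.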

If you are employing infrastructure-as-a-service (IaaS), you will have the ability to sniff your own traffic by installing a network analysis software probe on the hosted virtual server to see all of its traffic, thereby restoring your ability to analyze application issues that may otherwise be hidden.

If you are working within another environment and would like tips on capturing data, please leave us a comment.

Why Packets Count

A packet is the elemental source of data on a network. Packets are self-contained, including not only the information that is to be communicated, but also the routing instructions, addresses, protocols, etc. that allow the information to be correctly delivered, and possibly acknowledged. Packet delivery on a network is a lot like sending a letter via snail mail, with the contents of the envelope being the information to be transmitted, the envelope containing address and routing information, and the stamp describing the protocol to be used in delivery – first class, signature required at delivery, acknowledgement of receipt, etc. Also, just as with snail mail, issues can occur in transit that either damage the letter, or as many of us have experienced, prevent the letter from being delivered at all.

Extending this analogy, the contents of a letter, or in the case of packets the payload, are really of no concern to the Postal Service, and typically remain hidden inside the envelope. All the Postal Service needs to know is the address and the mode of delivery. They then determine the routing, without concern for the contents, until delivered to the final recipient. Only then do the contents (payload) become important.

So, what happens when something goes wrong? How would you even know it went wrong? Well, in the case of network analysis, this is where protocol and packet analysis become important. Often used interchangeably in the industry, protocol and packet analysis are in fact quite different. Protocol analysis relates strictly to the routing and delivery of packets. In terms of our Postal Service analogy, protocol analysis is performed using only the envelope – the contents of the envelope are never analyzed. Packet analysis goes one step further, performing both protocol and packet payload analysis.
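The envelope-versus-contents distinction maps directly onto packet structure. A minimal IPv4 sketch: protocol analysis stops at the header fields, while packet analysis also opens the payload. The addresses below are hand-built example values:

```python
def split_header_payload(ip_packet):
    """Split a raw IPv4 packet into its header fields (the "envelope",
    all that protocol analysis looks at) and its payload (the "letter
    contents", which packet analysis also examines)."""
    ihl = (ip_packet[0] & 0x0F) * 4  # header length: low nibble, in 32-bit words
    src = ".".join(str(b) for b in ip_packet[12:16])
    dst = ".".join(str(b) for b in ip_packet[16:20])
    return {"src": src, "dst": dst, "header_len": ihl}, ip_packet[ihl:]

# Hand-built IPv4 packet: a standard 20-byte header, then the payload.
hdr = bytes([0x45, 0, 0, 25, 0, 0, 0, 0, 64, 6, 0, 0,
             10, 0, 0, 1,    # source address 10.0.0.1
             10, 0, 0, 2])   # destination address 10.0.0.2
envelope, contents = split_header_payload(hdr + b"HELLO")
print(envelope)  # {'src': '10.0.0.1', 'dst': '10.0.0.2', 'header_len': 20}
print(contents)  # b'HELLO'
```

Protocol analysis would stop at `envelope`; packet analysis goes on to inspect `contents` as well.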

Network engineers are typically concerned with only the delivery of data on the network, and not the data itself, so why worry about packet analysis at all? Isn’t protocol analysis sufficient? Well, in many cases yes, so it’s the best place to start. But even our simple Postal Service analogy can once again help us in understanding why packet (payload) analysis is important. Let’s say you’re communicating with a family member using letters (yes, the world used to communicate in this highly archaic manner just a few short decades ago), and suddenly your weekly communication is disrupted. Why didn’t you get the response you were expecting? Well, with protocol analysis we can see that (a) no communications have been processed that were addressed to you and (b) your “pen pal” recently received additional communications addressed to them. Even though you can see from the return address on the envelopes that one of the correspondences was from the Publishers Clearing House, you assume that’s just the typical junk mail and has no bearing on the delayed response. But let’s say you can see the contents of all of the letters as well (packet analysis), and that the letter from the Publishers Clearing House was not your typical junk mail – it was a notification that your “pen pal” just won $10 million. With that bit of information the source of their distraction becomes clear, and it’s easy for you to understand why you didn’t hear back in the typical response time.

The above example made one basic, yet often unstated, assumption about our ability to even analyze the Postal Service (network). It assumed we had the proper tools in place, and were using them 24×7, to perform ongoing, real-time analysis, all the while archiving our network data so we could go back and analyze it as necessary. As in our example, it isn’t always immediately obvious that something did or did not happen as expected. You need constant monitoring, with ongoing data storage, so you can analyze the problem as it happens, not wait for it to happen again and hope you catch it. Far more network analysis time is spent trying to reproduce problems than actually solving them. Ongoing network monitoring based on packet analysis gives you all the information you need to analyze your network at its most elemental level.

So, the next time you sit down for some quality network analysis time, take a quick look at the packets per second. And then imagine that many little letters being delivered each second via your network infrastructure. At least for me, it provides a whole new perspective on just how hard my network works, and just how important my network monitoring and analysis software is to me when something goes wrong.
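For a feel of the scale of those “little letters,” a quick estimate of packets per second from throughput and average packet size; the 700-byte average is an assumed figure, since real traffic mixes vary widely:

```python
def packets_per_second(throughput_mbps, avg_packet_bytes=700):
    """Estimate how many packets per second a link carries at a given load.
    700 bytes is an assumed average packet size; real mixes vary widely."""
    return throughput_mbps * 1e6 / 8 / avg_packet_bytes

# A fully loaded 1 Gbps link:
print(round(packets_per_second(1000)))  # 178571 packets every second
```

Nearly 180,000 letters a second through your infrastructure – and every one of them is available to your monitoring software if you are capturing continuously.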