Software-defined networking (SDN) and OpenFlow were center stage this year at Interop Las Vegas, alongside the technologies that helped give rise to them: cloud computing and virtualization. Both of those technologies have revolutionized computing, but they have also created challenges for storage and LANs. SDN and OpenFlow have been touted as enabling technologies that will help reduce the complexity of cloud and virtualization, but before we look at their potential, let’s take a deeper dive into the history, use cases, and current state of SDN and OpenFlow.
History and Definition
Andy Gottlieb, who writes for Network World, wrote a great article on the history and definitions of SDN and OpenFlow. To sum it up, Stanford and UC Berkeley created OpenFlow as a programmable network protocol designed to manage and direct traffic among switches from an assortment of vendors. It separates the controller from the networking data plane and hardware; the controller uses its central view of the network to tell each switch where to forward packets. Centralized, multi-vendor control would make it easier to manage networks of potentially cheaper switches without single-vendor lock-in.
OpenFlow is the most prominent example of SDN, a paradigm designed to increase network flexibility and control by applying the orchestration of virtualization to networking. Virtualization already exists in networking in the form of VLANs, but configuration of networking equipment is still performed primarily on each switch independently. Adding orchestration – centralized, holistic control – would provide the same power and flexibility in networks that turned large-scale virtualization into the cloud. Rather than making forwarding decisions based on limited local information, the controller can calculate the most effective end-to-end path from real-time information, like adding live traffic updates to a GPS. Knowing where the congestion is, and how to avoid it, results in greater efficiency and speed.
During Interop there were several sessions on OpenFlow and SDN. One in particular covered how Indiana University is using OpenFlow to address some of the security issues that come with BYOD, as well as to control and manage its disparate network – here’s an article that discusses this in more detail. Another notable OpenFlow adopter is Google, which is currently using OpenFlow between its data centers to decrease cost and provide a more efficient, flexible network that speeds the delivery of services to users.
The promise of SDN is to create a smart network which will monitor and reconfigure itself based on traffic demands. While current routing protocols have some of this capability, such as advertising costs for different links in the path, they lack the automated adaptation and reconfiguration. If a forwarding device starts to drop packets on an interface, SDN provides the possibility of the controller dynamically increasing the cost for that link to reduce the flow of traffic. A researcher at Stanford University has already designed a load balancer which uses that network self-awareness to apply better balancing algorithms based on network state, not just server state. The load balancer can leverage Access Control Lists (ACLs) and other filtering on the switches to cause different flows to take different paths, allowing active-active load balancing across the entire network while avoiding asymmetric routing, traffic tromboning, or other artifacts which reduce efficiency and increase the difficulty of monitoring.
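The controller behavior described above can be sketched in a few lines. This is a minimal, hypothetical illustration (not any real controller's API): the network is modeled as a dictionary of directed links with costs, `react_to_drops` penalizes a link whose interface reports packet loss, and a plain Dijkstra search then steers new flows onto a cleaner path.

```python
import heapq

def shortest_path(links, src, dst):
    """Dijkstra over a dict {(a, b): cost} of directed links."""
    graph = {}
    for (a, b), cost in links.items():
        graph.setdefault(a, []).append((b, cost))
    dist, prev = {src: 0}, {}
    heap = [(0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr], prev[nbr] = nd, node
                heapq.heappush(heap, (nd, nbr))
    # Walk back from the destination to reconstruct the path.
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

def react_to_drops(links, link, drop_rate, threshold=0.01, penalty=10):
    """Controller reaction: penalize a link whose interface drops packets."""
    if drop_rate > threshold:
        links[link] += penalty
    return links

# Two paths from s1 to s4: via s2 (total cost 2) or via s3 (total cost 4).
links = {("s1", "s2"): 1, ("s2", "s4"): 1,
         ("s1", "s3"): 2, ("s3", "s4"): 2}
print(shortest_path(links, "s1", "s4"))   # ['s1', 's2', 's4']

# The s2->s4 interface starts dropping 5% of packets; raise its cost.
react_to_drops(links, ("s2", "s4"), drop_rate=0.05)
print(shortest_path(links, "s1", "s4"))   # ['s1', 's3', 's4']
```

The point of the sketch is the feedback loop: because the controller sees the whole topology, one local symptom (drops on an interface) can change end-to-end routing for every switch in the path.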
After the first wave of OpenFlow deployments focusing on network performance, the next wave will likely focus on security. The ACLs on switches are useful for more than just path steering: they can also be used to control whether or not to forward packets at all. Look for vendors to create a security-minded controller that applies a single unified security policy across the entire network, compiled and localized into specific ACLs for each switch. Each switch port would effectively become a firewall port, and the entire network would act as a single pervasive firewall. QoS could be expanded to take into account not only traffic priority but also security level.
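The "compile and localize" step might look something like the following sketch. Everything here is hypothetical (the policy format, the subnet-to-switch placement map, the ACL entry shape are all assumptions, not any vendor's product): a global first-match-wins policy is turned into per-switch ACL entries, installed on the switch attached to each rule's source subnet.

```python
# Hypothetical global policy: (src_subnet, dst_subnet, action) rules.
POLICY = [
    ("10.1.0.0/16", "10.9.0.0/16", "deny"),   # guest -> finance: blocked
    ("10.1.0.0/16", "0.0.0.0/0",  "allow"),   # guest -> anywhere else
]

# Assumed placement map: which edge switch hosts each subnet.
SUBNET_TO_SWITCH = {"10.1.0.0/16": "edge-sw1", "10.9.0.0/16": "edge-sw7"}

def compile_policy(policy, placement):
    """Localize the global policy: each rule becomes an ACL entry on the
    switch attached to the rule's source subnet (first match wins)."""
    acls = {}
    for src, dst, action in policy:
        switch = placement.get(src, "core")  # fall back to the core layer
        acls.setdefault(switch, []).append(
            {"match": {"src": src, "dst": dst}, "action": action})
    return acls

acls = compile_policy(POLICY, SUBNET_TO_SWITCH)
print(acls)  # both rules land on edge-sw1, deny before allow
```

Enforcing at the source switch, as this sketch does, is what turns every port into a firewall port: the denied traffic never crosses the network at all.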
Assuming that OpenFlow reaches critical mass, iterative improvements could also enable traffic-aware applications. Global enterprises face a unique challenge in performing cross-data-center backups for Disaster Recovery (DR), since they have to find windows of time when the data centers are not being heavily used in any global office. A network-aware backup application could register itself with a controller to create a short-term network service-level agreement (SLA), so the network would apply QoS priority to the traffic during the backup window, helping ensure that the backup completes in time. After the window closes, the SLA could reverse the priority, so the backup is allowed to finish without interfering with normal business traffic. Other possibilities include application-specific tuning: rather than tune the database for the network, the database could query the controller for the end-to-end traffic status, tune itself, and even create short-term SLAs for critical transactions lasting a few minutes or less.
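The register-an-SLA idea can be made concrete with a small sketch. The `Controller` class, its method names, and the priority strings below are all invented for illustration (no such northbound API is standardized): an application registers a time-bounded SLA, and the controller answers priority queries against it, falling back to best-effort outside the window.

```python
from datetime import datetime, timedelta

class Controller:
    """Hypothetical controller with a northbound API: applications
    register short-term SLAs; the controller answers QoS queries."""
    def __init__(self):
        self.slas = []

    def register_sla(self, app, priority, start, end):
        self.slas.append({"app": app, "priority": priority,
                          "start": start, "end": end})

    def priority_for(self, app, now):
        for sla in self.slas:
            if sla["app"] == app and sla["start"] <= now < sla["end"]:
                return sla["priority"]
        return "best-effort"  # outside the window, normal traffic wins

ctl = Controller()
window_start = datetime(2012, 6, 2, 1, 0)  # 01:00, a quiet global window
ctl.register_sla("dr-backup", "high", window_start,
                 window_start + timedelta(hours=3))

print(ctl.priority_for("dr-backup", window_start + timedelta(hours=1)))
# -> high (inside the backup window)
print(ctl.priority_for("dr-backup", window_start + timedelta(hours=5)))
# -> best-effort (window closed; backup must not interfere)
```

The same pattern covers the database example from the paragraph above: a transaction-scoped SLA is just a very short window registered for a different application name.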
The Current State of SDN and OpenFlow
You can’t talk about SDN and OpenFlow without bringing the biggest market leader in switches, routers, and other packet-moving gear into the picture: Cisco. Although Cisco hasn’t given many details about its SDN plan, it recently funded (with an option to purchase) a “spin-in” called Insieme to research and develop SDN solutions. Juniper has taken a similarly cautious approach, releasing OpenFlow software for its equipment as open source. Other companies, such as NEC and HP, have announced much more aggressive support.
The timing is right for vendors to support OpenFlow in their equipment, as the Open Network Foundation (ONF) has just released version 1.3, with the stated goal of providing a stable release for vendors to implement. Major features include support for IPv6, as well as improved bridging between data centers.
OpenFlow and SDN could create new competition for Cisco in this market, but the hype that they could be the demise of Cisco is simply that: hype. Cisco is firmly embedded in many (if not most) companies’ infrastructure, so deploying OpenFlow widely would mean abandoning the time and money already invested, a forklift upgrade, and potentially a complete re-architecture. Mike Fratto of Network Computing put it this way in a recent article:
IT shops are conservative. They don’t want to monkey with the part of the network running mission-critical services, for fear something will break. That’s a huge hurdle.
While there is no doubt that OpenFlow has huge potential, it also creates a lot of additional work for network engineers. Centralized control of a network sounds good in theory, but migration requires creating network-wide policies. It’s likely that we’ll hear about large “failed” OpenFlow deployments, where the amount of effort overwhelms the projected ROI. This will open the door to enterprise deployment of alternative technologies associated with Data Center Bridging (DCB), such as Transparent Interconnection of Lots of Links (TRILL) and Shortest Path Bridging (SPB). These two technologies are essentially replacements for Spanning Tree, with an emphasis on active-active use of all redundant links, allowing faster and more reliable networking simply by adding more links between equipment. They operate much like routing protocols, but on switches instead of routers. Because there is no controller, DCB techniques are a more cautious, evolutionary advance compared with the OpenFlow revolution. By building on the existing decentralized paradigm and teaching switches to intercommunicate like routers, DCB may be easier to deploy incrementally, requiring less re-architecture and creating less risk.
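The active-active idea behind TRILL and SPB is worth a quick illustration. Where Spanning Tree blocks redundant links outright, these protocols keep every equal-cost link carrying traffic, typically by hashing each flow onto one link so packets within a flow stay in order. This sketch is a generic per-flow hash, not either protocol's actual wire behavior; the link names and the 5-tuple layout are assumptions.

```python
import hashlib

def pick_link(flow, links):
    """Hash a flow's 5-tuple onto one of several equal-cost links, so
    every redundant link carries traffic (active-active) while any
    given flow always takes the same link, avoiding reordering."""
    key = "|".join(str(f) for f in flow).encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return links[digest % len(links)]

# Three parallel links between the same pair of switches, all in use.
links = ["sw1-sw2/portA", "sw1-sw2/portB", "sw1-sw2/portC"]
flow = ("10.0.0.5", "10.0.9.8", 6, 49152, 443)  # src, dst, proto, sport, dport

# The same flow always lands on the same link; different flows spread out.
assert pick_link(flow, links) == pick_link(flow, links)
print(pick_link(flow, links))
```

Spanning Tree would have left two of those three links idle; hashing per flow is how the "more links equals more capacity" promise in the paragraph above is actually realized.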
The challenge for OpenFlow now is to live up to the hype: deliver demonstrable performance improvements without requiring a forklift upgrade of the network core. Its potential is exciting, but if it’s too hard to deploy, it will never truly leave the research environment where it was born.