SDN: The core building blocks

When getting to know software-defined networking, you'll encounter a number of terms used in conjunction with the technology. Some are unique to SDN; others describe technologies that aren't unique to it but are frequently used in SDN designs.

It's helpful to have an understanding of these terms and their context. We'll take a look at three basic terminology categories as they relate to SDN: controllers, switching and overlay networks.

Controllers

One of SDN's big ideas is that a device called a controller talks to all of the network devices in a domain, learns the network topology, and programs the network from a point of central omniscience. An SDN controller shifts the model of network programming from distributed (network devices communicating with each other to determine forwarding paths) to centralized.

Central programming of the network is the significant value that a controller brings to a business. Conceptually, a controller can be used to deploy business policies to a network holistically and in a device-independent way. The controller acts like a layer of network middleware that abstracts the underlying physical network components such as switches, routers, firewalls and load-balancers.

With an SDN controller programming the network, operators are no longer in the position of having to program network devices individually through traditional means, such as the command-line interface. In addition, unique network forwarding paradigms can be created based on criteria such as dollar cost or security policy requirements.

A controller accomplishes this network programming via software, and it is in this software that SDN's promise of flexibility lies. The controller is a platform on which software runs, as well as a communications gateway through which that software talks to the network. Most controller architectures are modular, allowing the controller to communicate with different kinds of devices using different methods as required.

Thinking again about an SDN controller as middleware, two directions of communication are implied. The most discussed to date is southbound: when a controller is programming network devices and receiving data from them, this is known as southbound communication. An example of southbound communication is the controller programming network switch forwarding tables using OpenFlow, which we'll discuss in more detail below. The other direction is northbound: communications between the controller and applications that wish to program the network. An example of northbound communication is an application like VMware's vCloud Director requesting network provisioning services via a controller.
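To make the two directions concrete, here is a minimal Python sketch of a controller sitting between them. The class and method names are hypothetical and don't correspond to any real controller's API; they simply show a northbound policy request being translated into southbound device programming.

    # Hypothetical sketch of a controller as middleware; not a real controller API.
    class SdnController:
        def __init__(self, switches):
            # Southbound: the devices this controller has discovered and can program.
            self.switches = switches

        def request_network_service(self, policy):
            """Northbound: an application asks for a network service in
            business terms rather than device-by-device commands."""
            rules = self._translate(policy)
            self._program_switches(rules)

        def _translate(self, policy):
            # Turn a device-independent policy into per-switch forwarding rules.
            return [{"switch": sw, "match": policy["match"], "action": policy["action"]}
                    for sw in self.switches]

        def _program_switches(self, rules):
            """Southbound: push each rule to the device, e.g., via OpenFlow."""
            for rule in rules:
                print(f"programming {rule['switch']}: {rule['match']} -> {rule['action']}")

    # A northbound request: expressed once, applied across the whole domain.
    controller = SdnController(switches=["leaf1", "leaf2", "spine1"])
    controller.request_network_service(
        {"match": {"ip_src": "10.1.0.0/16"}, "action": "drop"})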

Switching

When it comes to SDN, perhaps the most talked about device is the network switch, Ethernet switches in particular. For years, Ethernet switches have been increasing in speed and density, providing data centers with uplinks for their hosts, blade centers and Ethernet storage. With the advent of server virtualization enabled by hypervisors, the software switch has also become significant, plumbing virtual servers to virtual network interface cards, aggregating traffic and sending it out of the hypervisor to the physical network.

Both the hardware and software switch have significant roles to play within SDN, as it is chiefly their forwarding tables that are being programmed by a controller. Considering that soft switches reside at the network edge, the concept of a "smart, soft edge" has arisen.

Network designers that advocate for a smart, soft edge feel that the software switch running on a hypervisor is a good place to install rich network functionality, leaving the physical hardware switches to run a simpler configuration. In a smart, soft edge SDN design, controllers apply forwarding, QoS and security policies in the network's soft switches.

For example, the soft switch could have access lists, QoS parameters for rate limiting and traffic prioritization, and forwarding intelligence applied to virtual ports. By the time network data has left the hypervisor, it has already been tested for security compliance, rate-shaped and encapsulated (if required). Placing all of these functions at the network edge allows core hardware switches to focus on rapid transport of traffic.
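As an illustration, the sort of per-port policy a controller might push to a soft switch could look something like the following Python sketch. The structure and field names are hypothetical rather than any vendor's actual schema, but they capture the idea of security, QoS and forwarding decisions being applied before traffic leaves the hypervisor.

    # Hypothetical per-virtual-port policy a controller might push to a soft switch.
    # Field names are illustrative only, not any vendor's actual schema.
    virtual_port_policy = {
        "port": "vnic-web01",                # a VM's virtual NIC
        "acl": [                             # security checked before leaving the host
            {"match": {"proto": "tcp", "dst_port": 80},  "action": "permit"},
            {"match": {"proto": "tcp", "dst_port": 443}, "action": "permit"},
            {"match": "any",                             "action": "deny"},
        ],
        "qos": {                             # rate-shape and prioritize at the edge
            "rate_limit_mbps": 100,
            "priority": "best-effort",
        },
        "forwarding": {                      # encapsulate into an overlay if required
            "encapsulation": "vxlan",
            "segment_id": 5001,
        },
    }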

Not all networks lend themselves well to the smart, soft edge design, nor can all conceivable SDN use cases be met by a soft switch. There's still a role for SDN to play with hardware switches for tasks like end-to-end business policy deployment, traffic steering and security enforcement. In addition, there's still some amount of basic configuration to be done to a hardware switch, no matter how smart the edge network might be.

The primary southbound protocol used by a controller to program the forwarding behavior of both hardware and software switches is OpenFlow. OpenFlow (OF) is a protocol whose standard is undergoing rapid development by the Open Networking Foundation.
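Conceptually, an OpenFlow switch holds a table of flow entries, each pairing match criteria with a set of actions, and the controller adds, modifies and removes those entries. The plain-Python sketch below models that idea; it illustrates the concept only, and real deployments would use an OpenFlow library or controller framework rather than hand-rolled structures like these.

    # Conceptual model of an OpenFlow flow table; an illustration, not the wire protocol.
    flow_table = [
        # The highest-priority entry wins when multiple entries match.
        {"priority": 200, "match": {"eth_type": 0x0800, "ip_dst": "10.0.0.5"},
         "actions": ["output:port2"]},
        {"priority": 100, "match": {"eth_type": 0x0806},   # ARP
         "actions": ["flood"]},
        {"priority": 0,   "match": {},                      # table-miss entry
         "actions": ["send_to_controller"]},
    ]

    def lookup(packet):
        """Return the actions of the highest-priority matching entry."""
        candidates = [e for e in flow_table
                      if all(packet.get(k) == v for k, v in e["match"].items())]
        best = max(candidates, key=lambda e: e["priority"])
        return best["actions"]

    # The controller programs the table; the switch just matches and forwards.
    print(lookup({"eth_type": 0x0800, "ip_dst": "10.0.0.5"}))  # ['output:port2']
    print(lookup({"eth_type": 0x0800, "ip_dst": "10.0.0.9"}))  # ['send_to_controller']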

The ONF is a members-only organization made up primarily of networking vendors and service providers, and it operates largely behind closed doors, publishing its OpenFlow specifications when they are released. The OF1.0 specification is the one most frequently seen in production equipment; OF1.3 is the likely next step for most switch vendors, and OF1.4 is under development at the time of this writing.

Keep in mind that while OpenFlow is implemented fully in software switches like Open vSwitch, OF has proven challenging to translate into the network chips (ASICs) found in hardware switches. New silicon that handles OF better is reportedly coming, but customers evaluating OF's usefulness with their existing network hardware must test thoroughly to be sure the required OF functions will scale well enough to support their applications.

For northbound communications, controllers frequently offer APIs. A REST (representational state transfer) API is perhaps the most common. REST APIs exchange data and instructions over HTTP, using familiar methods such as GET and POST. APIs provide a way for applications external to the controller to tell the controller what should happen on the network.
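As a sketch of what a northbound call might look like, the Python snippet below uses the widely available requests library to POST a policy to a controller. The controller address, URL path and JSON body are hypothetical, since each controller defines its own REST resources, but the pattern of authenticating and POSTing a JSON-described policy is typical.

    # Hypothetical northbound REST call; the endpoint and payload schema are
    # illustrative only, as every controller defines its own REST resources.
    import requests

    CONTROLLER = "https://sdn-controller.example.com:8443"

    payload = {
        "name": "web-tier-policy",
        "match": {"ip_dst": "10.1.20.0/24", "tcp_dst": 443},
        "action": "forward",
        "qos": {"priority": "high"},
    }

    resp = requests.post(
        f"{CONTROLLER}/api/v1/policies",     # hypothetical resource path
        json=payload,
        auth=("admin", "admin"),             # most controllers require credentials
        timeout=10,
    )
    resp.raise_for_status()
    print("Controller accepted policy:", resp.json())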

Notably, vendor-specific APIs have arisen in the southbound direction in addition to OF. This is due in part to OF's limited set of commands and sometimes-difficult implementation in legacy silicon. Despite supporting OpenFlow, Cisco is an example of a vendor emphasizing APIs via its ONE initiative, arguing that its APIs allow network programmers to take full advantage of the capabilities of their hardware.

Overlays

Another term that comes up frequently in SDN conversations is that of overlay networks. Simply stated, overlays are used to create virtual network containers that are logically isolated from one another while sharing the same underlying physical network.

Network engineers familiar with commonly deployed Generic Routing Encapsulation (GRE) will grasp the overlay concept readily. One packet (or frame) is encapsulated inside of another one; the encapsulated packet is forwarded to a tunnel endpoint where it is decapsulated. The original packet is then delivered to its destination. Overlays leverage this "packet in a packet" technique to securely hide networks from one another and traverse network segments that would otherwise be barriers. Layer 2 extension and multi-tenancy are popular use cases for overlays.

A number of overlay protocols have been released and promoted by standards bodies during the last few years, driven by a virtualized data center's ability to move a host anywhere at any time. Some SDN controllers use overlays as their transport of choice to build a bridge between hosts scattered across the data center; soft switches usually serve as either end of the tunnel. Virtual eXtensible LAN (VXLAN) has the broadest industry support at this time, with Cisco, Brocade and VMware, among others, committed to the overlay. Termination of VXLAN tunnels in hardware is supported by switches from Arista and Brocade. Hardware termination of VXLAN underscores the groundswell of industry adoption, as overlays are usually terminated by software switches.

VXLAN encapsulates Layer 2 frames inside of a Layer 3 UDP packet. This allows hosts inside of a VXLAN segment to communicate with each other as if they were on the same Layer 2 network, even though they might be separated by one or more Layer 3 networks.

In addition, since VXLAN preserves the entire Layer 2 frame, VLAN tags are preserved, allowing multiple VLANs, and thus multiple Layer 3 networks, to exist inside a VXLAN segment. Customers (also known as tenants) inside the VXLAN segment see a network much like any they are used to, while the underlying network only sees VXLAN packets identified by a segment ID.

Each VXLAN network is identified by a segment ID in the VXLAN header; this ID is 24 bits long, allowing for 16 million tenants to share the same network infrastructure while staying isolated from one another.
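The VXLAN header itself is only eight bytes, and the sketch below, based on the header layout in RFC 7348, shows where the 24-bit segment ID (also called the VNI) sits within it. Building the header by hand like this is purely illustrative; in practice the hypervisor's soft switch or the NIC performs the encapsulation.

    # Build an 8-byte VXLAN header by hand to show where the 24-bit VNI lives.
    # Layout per RFC 7348: flags (8 bits), reserved (24), VNI (24), reserved (8).
    import struct

    def vxlan_header(vni):
        if not 0 <= vni < 2**24:
            raise ValueError("VNI must fit in 24 bits")
        flags = 0x08                  # the I flag: "VNI field is valid"
        # Pack flags plus reserved bits into one 32-bit word, then the VNI
        # shifted past the final reserved byte into a second 32-bit word.
        return struct.pack("!II", flags << 24, vni << 8)

    header = vxlan_header(5001)
    print(header.hex())               # 0800000000138900 -> VNI bytes 0x001389 == 5001
    print(f"24 bits allow {2**24:,} isolated segments")   # 16,777,216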

VXLAN has been criticized for its reliance on IP multicast to carry broadcast, unknown unicast and multicast traffic originated inside of tenant networks. Many physical networks do not have multicast routing enabled, and engineers unfamiliar with multicast find it an intimidating tool to deploy due to its potential complexity. For this reason, some vendors using VXLAN as an overlay deploy it with enhanced intelligence provided by an SDN controller, obviating the need for multicast routing.

Similar to VXLAN, Network Virtualization with GRE (NVGRE) defines tenant networks using a 24-bit identifier, found in this case in the GRE header's key field. NVGRE is largely a Microsoft technology, and is the overlay of choice in Hyper-V.

NVGRE differentiates itself from VXLAN by not requiring multicast to carry broadcast, unknown unicast and multicast traffic between endpoints. Instead, the Windows Network Virtualization module (a Layer 3 switch) embedded in Hyper-V is pre-populated with all host-to-tunnel-endpoint mappings via PowerShell cmdlets. This eliminates the need for flooding, as there's no such thing as an unknown endpoint in this approach.
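A toy illustration of that pre-populated lookup follows: if every tenant MAC address is already mapped to the IP address of the Hyper-V host (the tunnel endpoint) behind which it lives, the virtual switch never has to flood to find a destination. The table below is hypothetical and greatly simplified, but it conveys why there is no such thing as an unknown endpoint in this model.

    # Hypothetical, simplified view of pre-populated NVGRE endpoint mappings:
    # (tenant, VM MAC) -> IP of the Hyper-V host (tunnel endpoint) where it lives.
    endpoint_map = {
        ("tenant-a", "00:15:5d:01:02:03"): "192.168.10.11",
        ("tenant-a", "00:15:5d:01:02:04"): "192.168.10.12",
        ("tenant-b", "00:15:5d:0a:0b:0c"): "192.168.10.11",
    }

    def tunnel_endpoint(tenant, dst_mac):
        """Every destination is known ahead of time, so no flooding is needed."""
        return endpoint_map[(tenant, dst_mac)]

    print(tunnel_endpoint("tenant-a", "00:15:5d:01:02:04"))   # 192.168.10.12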

Although VMware is firmly behind VXLAN, the overlay known as Stateless Transport Tunneling (STT) also came under the VMware banner in VMware's acquisition of Nicira. STT is a part of Nicira's Network Virtualization Platform and is notable mostly because the encapsulation format leverages a modern network interface card's hardware ability to break large blocks of data into smaller segments.

This capability is called TCP segmentation offload (TSO); a TSO-capable NIC takes on the burden of segmentation, freeing up a server's CPU for other tasks. The future of STT is dubious, considering that VXLAN already has VMware's support as well as support from the wider industry.

Aside from VXLAN, NVGRE and STT, another developing overlay worth following is Network Virtualization Overlays (NVO3), which is being developed by an IETF working group. The NVO3 problem statements are similar to the issues addressed by the overlays already discussed: traffic isolation, tenants' freedom to use whatever addressing scheme they choose, and the ability to place virtual machines anywhere in a network without concern for the Layer 3 separation found in the underlying core. How NVO3 will develop and what encapsulation it will use remains to be seen, but it's shaping up along the use-case lines submitted by NVO3 working group participants.

Conclusion

The three terminology categories we've discussed come together like this: an omniscient central controller discovers the topology of the network's switches, whether they are software switches in a hypervisor or hardware switches in a data center rack.

This central controller acts as middleware between applications in a northbound direction and switches in a southbound direction. The northbound applications articulate business policies, network configuration and the like to the controller; the controller translates these policies and configurations into southbound programming directives aimed at network switches.

The southbound protocol most often used is OpenFlow, but the challenges of retrofitting OpenFlow to existing network hardware have led vendors to promote network programming via APIs.

Overlays sit on top of this platform of network programmability and physical device abstraction. Overlays allow cloud providers and enterprises that wish to support multitenancy to securely separate their customers' traffic from one another, while at the same time allowing their virtual hosts to reside anywhere within a data center.

ethan.banks@packetpushers.net | @ecbanks
