The network’s role in improving application security, reliability and efficiency

This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter's approach.

Access to data center resources needs to be fast, secure and reliable, which poses a significant challenge for the data center network infrastructure tasked with adhering to the following principles:

* Deliver data center application resilience, high availability and fault tolerance.

* Secure data center applications by providing traffic inspection and security policy enforcement.

* Achieve maximum data center application delivery optimization and acceleration.

Such functionality can also be, and at times is, delivered by the applications themselves without any special requirements from the network. Security services, for example, can take the form of SSL encryption and host-level firewalls; application resilience can be achieved through clustering; and better-written code can, to a certain extent, account for optimized and accelerated application delivery.

Why then do we need the network to play a role? First and foremost, not all applications were designed and written with security, reliability and efficiency in mind. What's more, network-delivered services scale better for larger environments and can complement server- and application-level functionality rather than be orthogonal to it.

Let's look at how these services should be integrated into the data center fabric and some pros and cons of each approach.

The traditional model of data center service delivery relies on physical service appliances (or service modules) positioned adjacent to the data center Layer 2/Layer 3 boundary, which most often occurs in the Aggregation (aka Distribution) Layer. In fact, data center service delivery models have become so common that a new design layer simply called the Services Layer was introduced. Service appliances can operate in two main modes, routed or bridged, and while there are more variations of each, let's stick with the main ones:

Routed operation mode

In routed mode, service appliances behave like routers, with the client-facing side and the server-facing side belonging to two different IP subnets. Traffic forwarded through the service appliance is routed based on IP reachability while service functionality is applied. Introducing routed mode appliances into an existing topology often requires IP addressing changes to accommodate the requirement of having different IP subnets on the client- and server-facing sides.
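
To make the two-subnet requirement concrete, here is a minimal Python sketch of a routed-mode forwarding decision; the subnets and interface names are hypothetical:

    import ipaddress

    # Hypothetical routed-mode appliance: each side owns an interface in a
    # different IP subnet, so forwarding is a routing decision.
    client_side = ipaddress.ip_network("10.1.1.0/24")   # client-facing subnet
    server_side = ipaddress.ip_network("10.2.2.0/24")   # server-facing subnet

    def forward(dst_ip: str) -> str:
        """Pick the egress interface whose subnet contains dst_ip."""
        dst = ipaddress.ip_address(dst_ip)
        if dst in server_side:
            return "server-facing interface"
        if dst in client_side:
            return "client-facing interface"
        return "default route (upstream)"

    print(forward("10.2.2.10"))   # -> server-facing interface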

Servers can either be Layer 2 or Layer 3 adjacent to the appliances themselves, although with Layer 3 they are not really adjacent but rather reachable through the routed hop(s). In case of Layer 2 adjacency, service appliances become default gateways for the servers, while Aggregation/Distribution devices behave purely as Layer 2 switches for those respective server VLANs.

In case of Layer 3 "adjacency," when servers are separated from the service appliances by Layer 3 hop(s), Aggregation/Distribution Layer devices hold the traditional role of the default gateway and route, rather than bridge, the traffic toward the service appliances.

Service appliances most often maintain session state and require symmetric traffic flow between clients and servers; breaking that symmetry breaks TCP session establishment. In the case of a single data center, special care needs to be taken to force the returning traffic (from servers to clients) back through the service appliances rather than having Aggregation/Distribution Layer devices route the traffic directly toward the clients, bypassing them.
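
The symmetry requirement can be sketched with a toy flow table in Python; the 5-tuple layout is illustrative, not how any particular appliance stores state:

    # Toy stateful flow table keyed on the 5-tuple. Return traffic arriving
    # at an appliance that holds no state for the flow is dropped, which is
    # why asymmetric routing breaks TCP session establishment.
    flow_table = set()

    def outbound(src, sport, dst, dport, proto="tcp"):
        flow_table.add((src, sport, dst, dport, proto))   # create state

    def inbound(src, sport, dst, dport, proto="tcp"):
        # Return traffic must match the mirrored 5-tuple of an existing flow.
        if (dst, dport, src, sport, proto) in flow_table:
            return "forward"
        return "drop (no state: asymmetric path)"

    outbound("192.0.2.33", 45000, "10.2.2.10", 443)
    print(inbound("10.2.2.10", 443, "192.0.2.33", 45000))   # forward
    print(inbound("10.2.2.10", 443, "192.0.2.99", 51000))   # drop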

In the case of multiple data centers or a cloud deployment, several service appliance pairs can exist, and it is important to make sure the returning traffic reaches the appliance pair that processed the forward traffic, because this pair holds the state for those connections. There is no particularly elegant way of making that happen, and tools such as Source NATing or policy-based routing have to be employed, unless you're willing to consider and experiment with LISP.
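
Source NATing is the blunter of the two tools: by rewriting the client source address to one owned by the appliance pair, server replies are forced back through that same pair. A rough Python sketch, with all addresses and the port allocation scheme purely hypothetical:

    # Hypothetical Source NAT on the appliance pair: replies to NAT_POOL_IP
    # are routed back to this pair, which holds the connection state.
    NAT_POOL_IP = "10.2.2.254"   # address owned by this appliance pair
    nat_table = {}               # (nat_ip, nat_port) -> (real_src, real_sport)

    def snat(src, sport):
        nat_port = 20000 + len(nat_table)              # naive port allocation
        nat_table[(NAT_POOL_IP, nat_port)] = (src, sport)
        return NAT_POOL_IP, nat_port

    def un_snat(dst, dport):
        # Reverse translation applied to the server's reply.
        return nat_table.get((dst, dport))

    nat_ip, nat_port = snat("192.0.2.33", 45000)
    print(un_snat(nat_ip, nat_port))   # ('192.0.2.33', 45000)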

Another consideration with Layer 3 "adjacent" service appliances is preventing the servers behind them from talking to each other directly, bypassing security policy enforcement, as is the case with firewalls and IPSs. Here, direct server communication can be avoided by utilizing VLAN-to-VRF mapping defined on aggregation/distribution devices. VLAN-to-VRF mapping is also an essential tool for creating security zoning, where VLANs belonging to a common VRF (a common security zone) can communicate without firewall policy enforcement or IPS inspection, while only traffic traversing the security zone boundaries is forwarded to the firewall or IPS for analysis.
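
The zoning logic reduces to a simple lookup, sketched below in Python; the VLAN numbers and zone names are invented for illustration:

    # Illustrative VLAN-to-VRF mapping: VLANs in the same VRF form one
    # security zone and talk freely; crossing a zone boundary sends traffic
    # to the firewall/IPS for enforcement.
    vlan_to_vrf = {
        100: "web-zone",
        110: "web-zone",   # same zone as VLAN 100, so no inspection needed
        200: "db-zone",
    }

    def path(src_vlan, dst_vlan):
        if vlan_to_vrf[src_vlan] == vlan_to_vrf[dst_vlan]:
            return "route directly (same security zone)"
        return "redirect to firewall/IPS (zone boundary crossed)"

    print(path(100, 110))   # routed directly
    print(path(100, 200))   # inspected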

Bridge operation mode

Bridge mode service appliances behave as switches or bridges, forwarding traffic based on MAC address reachability, with the client- and server-facing sides sharing the same IP subnet. Introducing bridge mode service appliances does not require IP address changes, which makes them attractive. However, being Layer 2 entities, they must play "nicely" with the Layer 2 bridging domain to prevent the formation of loops. This can be achieved either by not forwarding traffic on standby appliances (assuming two appliances exist for high availability) or by passing Spanning Tree BPDUs on both active and standby appliances while allowing the surrounding Layer 2 network to converge around them (Spanning Tree-wise).
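
For contrast with the routed case, here is a minimal Python sketch of bridge-mode forwarding with source-MAC learning; the port names and MAC addresses are hypothetical:

    # Toy bridge-mode forwarding: learn source MACs per port, forward on
    # destination MAC, flood when the destination is unknown. No routing
    # decision is involved; both sides share one IP subnet.
    mac_table = {}   # mac -> port

    def frame(src_mac, dst_mac, in_port, ports=("client", "server")):
        mac_table[src_mac] = in_port                      # learn the source
        out = mac_table.get(dst_mac)
        if out is None:
            return [p for p in ports if p != in_port]     # flood
        return [out] if out != in_port else []            # forward or filter

    print(frame("aa:aa:aa:aa:aa:aa", "bb:bb:bb:bb:bb:bb", "client"))  # flood
    print(frame("bb:bb:bb:bb:bb:bb", "aa:aa:aa:aa:aa:aa", "server"))  # ['client']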

Default gateway functionality is maintained on aggregation/distribution layer devices, which are reachable through the bridge mode service appliances.

Bridge mode appliances are almost always Layer 2 adjacent to the servers behind them, which means there are no routed hop(s) in between. Such MAC-based forwarding simplifies achieving the traffic flow symmetry needed for stateful service appliances, although distributed data centers or cloud deployments still require attention, especially when first-hop routing protocol localization methods are being utilized.

Direct communication between the servers, which can bypass firewall or IPS inspection, is easily prevented by putting those servers in different VLANs, so the traffic headed toward the default gateway for inter-VLAN routing is inspected as it passes through the service appliances. Private VLANs or port ACLs could also be employed to isolate servers from each other and create Layer 2 security zoning.

To summarize, there is no golden rule as to which of the modes is better; it really comes down to individual use cases or, at times, the personal preferences of the network designers.

Application resilience

Application resilience is achieved by bundling servers (physical or virtual) into server farm constructs reachable through a virtual IP address owned by the load-balancer service appliance. The use of server farms means that the failure of an individual server within the farm goes unnoticed by clients, as the load-balancer simply stops forwarding traffic to the failed server. Such behavior guarantees application availability as long as there are "live" servers in the farm. Of course, performance might be impacted by the fact that fewer servers are processing client requests.
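
A minimal Python sketch of the idea follows; the farm membership, addresses and round-robin selection are all hypothetical stand-ins for what a real load-balancer does:

    import itertools

    # Toy server farm behind a load-balancer VIP. A health probe marks
    # servers up or down; failed servers are skipped transparently, so the
    # VIP stays reachable while any farm member is alive.
    farm = {"10.2.2.11": True, "10.2.2.12": True, "10.2.2.13": True}
    _rr = itertools.cycle(sorted(farm))

    def pick_server():
        for _ in range(len(farm)):
            server = next(_rr)
            if farm[server]:                  # latest health-check result
                return server
        raise RuntimeError("no live servers: the VIP is down")

    farm["10.2.2.12"] = False                 # simulate a server failure
    print([pick_server() for _ in range(4)])  # .12 is never chosen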

Load-balancers implement a variety of traffic distribution algorithms across the servers in the farm based on the desired logic. Such algorithms range from simplistic round robin to sophisticated Application Layer intelligence. Encrypted traffic, as is the case with SSL, presents an issue when Application Layer load-balancing is desired, simply because the application payload is encrypted and unavailable for inspection. To address this, encrypted traffic must be decrypted by either the load-balancer or a front-end SSL termination point (aka reverse proxy), which then allows the clear-text payload to be inspected for the Application Layer load-balancing decision.
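
Once the payload is in the clear, an Application Layer decision can be as simple as parsing the HTTP request line, as in this hypothetical Python sketch (the URL paths and farm names are invented):

    # Illustrative Layer 7 steering on a decrypted HTTP request. None of
    # this is possible while the payload is still SSL-encrypted.
    def l7_select(http_request: str) -> str:
        request_line = http_request.splitlines()[0]   # "GET /img/x.png HTTP/1.1"
        path = request_line.split()[1]
        if path.startswith("/img/"):
            return "static-content-farm"
        if path.startswith("/api/"):
            return "application-farm"
        return "default-farm"

    print(l7_select("GET /img/logo.png HTTP/1.1\r\nHost: example.com"))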

In data center environments, SSL termination functionality is usually collapsed into the load-balancers themselves, which can operate in either routed or bridged mode, as discussed. In routed mode, after the load-balancing decision is made, traffic is forwarded to the servers by rewriting the destination IP address of the load-balanced packets to match the server chosen by the distribution algorithm. With a bridged mode load-balancer, the same process occurs with destination MAC addresses.
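
The difference between the two rewrites can be captured in a few lines of Python; the packet fields and addresses are, again, purely illustrative:

    # Post-decision rewrite: routed mode swaps the destination IP (VIP ->
    # real server), bridged mode swaps the destination MAC instead.
    def rewrite(packet: dict, server: dict, mode: str) -> dict:
        pkt = dict(packet)
        if mode == "routed":
            pkt["dst_ip"] = server["ip"]      # VIP -> chosen server's IP
        elif mode == "bridged":
            pkt["dst_mac"] = server["mac"]    # gateway MAC -> server's MAC
        return pkt

    pkt = {"dst_ip": "10.2.2.100", "dst_mac": "aa:aa:aa:aa:aa:aa"}  # dst_ip = VIP
    srv = {"ip": "10.2.2.11", "mac": "bb:bb:bb:bb:bb:bb"}
    print(rewrite(pkt, srv, "routed"))
    print(rewrite(pkt, srv, "bridged"))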

Application delivery optimization

Some load-balancers can also perform traffic optimization for a variety of Application Layer protocols; however, more comprehensive functionality is often offered by dedicated traffic-optimizing appliances, which have a richer feature set and broader Layer 7 protocol support. Traffic optimization plays a critical role in conserving WAN link bandwidth and in most cases contributes greatly to increased network performance.

The topological positioning of traffic optimization functionality often coincides with the data center L2/L3 boundary due to a popular traffic redirection method using the WCCP protocol. Branch offices often implement traffic-optimizing appliances at the edge and use WCCP redirection as well. Cisco, for example, incorporates WAN optimization functionality into a network module, which can be inserted into an ISR G1 or G2 edge router for an even simpler branch topology.

Improved performance can mainly be attributed to three factors. Data redundancy elimination performs a caching function where repetitive blocks of data are not transmitted across the WAN; instead, the intent to transmit them is signaled between the WAN optimizing appliances on both ends of the link. The data itself is reproduced on the appliance closer to the data recipient, which saves WAN bandwidth.
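
A toy Python model of the mechanism is below; the chunk size, hashing scheme and shared-cache abstraction are all simplifications:

    import hashlib

    # Rough sketch of data redundancy elimination: both ends keep a chunk
    # cache keyed by hash. Previously seen chunks cross the WAN as short
    # signatures; only the byte counts are "transmitted" here.
    CHUNK = 4096
    cache = set()   # hashes known to both ends (simplified shared view)

    def send(data: bytes) -> int:
        wan_bytes = 0
        for i in range(0, len(data), CHUNK):
            chunk = data[i:i + CHUNK]
            digest = hashlib.sha256(chunk).digest()
            if digest in cache:
                wan_bytes += len(digest)   # signal the chunk, not the data
            else:
                cache.add(digest)
                wan_bytes += len(chunk)    # first sighting: send in full
        return wan_bytes

    payload = b"A" * 4096 + b"B" * 4096
    print(send(payload))   # first transfer: the full 8192 bytes
    print(send(payload))   # repeat transfer: two 32-byte signatures (64 bytes)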

TCP flow optimization aims to improve the slow-start behavior of the TCP/IP stack and helps the TCP window size "recover" faster and more efficiently following packet loss.
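
To illustrate why this matters, here is a toy Python model of slow start; it is a deliberate oversimplification (real TCP stacks cap growth at the slow-start threshold, among other things), not a protocol implementation:

    # Toy slow-start model: the congestion window doubles each round trip,
    # so a larger initial window reaches a given window size in fewer RTTs.
    def rtts_to_reach(target_segments: int, initial_window: int) -> int:
        cwnd, rtts = initial_window, 0
        while cwnd < target_segments:
            cwnd *= 2      # slow start doubles cwnd once per RTT
            rtts += 1
        return rtts

    print(rtts_to_reach(1024, initial_window=2))    # classic stack: 9 RTTs
    print(rtts_to_reach(1024, initial_window=32))   # optimized stack: 5 RTTs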

Finally, application-specific acceleration relies on the well-known behavioral characteristics of common applications. You have to be careful, however, when dealing with home-grown applications that do not behave the way WAN optimizing appliances expect: acceleration can actually break them rather than improve their performance.

WAN optimization shares similar challenges with load balancing when traffic is encrypted. Encrypted traffic needs to be decrypted by either the WAN optimizing appliances or a front-end SSL termination point in order to make full use of application delivery acceleration.

Virtual appliances

Multi-tenancy and infrastructure-as-a-service concepts, which feed into cloud computing models, bring a different set of design considerations to data center infrastructure service delivery. Cloud computing makes use of pervasive server virtualization technologies, where server mobility for the purposes of distributed resource management or disaster recovery becomes an essential component of the solution portfolio.

As discussed, stateful service appliances rely on traffic flow symmetry to have visibility into the bi-directional communication between clients and servers. This principle has heavy implications where virtual server mobility is concerned, because it means that even after a mobility event, bi-directional client-server communication needs to pass through the original set of service appliances where the initial traffic flow state was created prior to the VM move, even if that means the Data Center Interconnect link needs to be crossed. Such a flow incurs additional network latency and can potentially run into bandwidth issues, since the original service appliances act as a "fan-out" point for this type of traffic.

This behavior also somewhat contradicts the principles of building cloud-computing environments, since network performance issues can inhibit the ability to deliver the any-service-anywhere experience we expect from true cloud deployments. The same can be said about maintaining security policy and segmentation between cloud tenants, or accelerating client-server application traffic, throughout virtual server mobility events.

It is clear that a new concept in infrastructure service delivery needs to be created, but do we really need to completely "reinvent the wheel"?

Not necessarily. We have all grown to trust the virtual switching currently happening in the Hypervisor Layer. We also know that virtual machine mobility does not impair our ability to carry a machine's network connectivity properties, such as VLAN assignment, from the source to the destination physical server. This is done seamlessly by the distributed virtual switch, which is either embedded in the hypervisor or installed on top of it.

Now, what if we could make our stateful service appliances behave in a similar way by "following" the virtual machine as it moves around, rather than staying still and expecting the virtual machine or, more precisely, the network in between, to deliver the traffic in a "fan-out" fashion?

This is exactly what conceptually happens with virtualized service appliances. Technically, traffic still gets forwarded from the virtual servers to the virtual service appliances; however, all of this is done seamlessly at the hypervisor level by the virtual service appliances themselves, once they have been set up. The outside data center fabric is no longer tasked with keeping traffic symmetry, security segmentation and policy enforcement, or application acceleration. Sweet.

Virtual service appliances run on top of the hypervisor and work in tandem with distributed virtual switches to make sure that services are applied to the virtual machine, wherever it goes, in a consistent fashion.
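
Conceptually, the service policy is bound to the virtual machine's port profile rather than to a physical location, so it travels with the VM. A hypothetical Python sketch (the profile, host and VM names are all invented):

    # Conceptual sketch: policy lives in the VM's port profile, so a
    # mobility event carries it to the new host with no fabric changes.
    port_profiles = {"vm-web-01": {"zone": "web-zone", "ips": "inspect"}}
    hosts = {"esx-01": {"vm-web-01"}, "esx-02": set()}

    def migrate(vm: str, src: str, dst: str) -> dict:
        hosts[src].discard(vm)
        hosts[dst].add(vm)
        # Nothing to reconfigure upstream: the policy follows the profile.
        return port_profiles[vm]

    print(migrate("vm-web-01", "esx-01", "esx-02"))   # same policy applies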

As cloud computing gains more traction in the service provider and enterprise space, virtual service appliances and hypervisor-delivered services continue to evolve. Some are still skeptical about how well those platforms perform compared to hardware appliances, and although such concerns are not entirely unsubstantiated, the advantages of flexibility and adaptability should not be underestimated. Keep your eyes open for things to come.

Contact the author: klebanov@cisco.com
