This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter's approach.
Network congestion has compelled organizations to deploy traffic shaping and quality of service (QoS) appliances just before the WAN router to control outbound traffic. But in today's complex environments, organizations are rethinking how to manage the onslaught of data flowing across the network, and the focus of congestion control has increasingly shifted to traffic flowing inbound from the many data sources.
While outbound QoS has been sufficient for controlling corporate network traffic in the past, two significant trends are driving the need for a shift to inbound QoS:
1) Any-to-any networks and applications: The adoption of mesh network topologies has enabled routing of business application traffic directly from one branch office to another. The types of applications that traverse these paths commonly include VoIP, desktop videoconferencing, and unified communications tools such as Microsoft Lync. When a user in one branch calls a user in another branch office, the VoIP call gets routed point-to-point without requiring traffic to be backhauled to the corporate data center. As a result, this traffic competes with application data coming from the data center as well as from other sites. In this case, outbound QoS at any single location has little if any effect. The only place where all sources of incoming traffic can be effectively controlled is at the receiving location itself.
2) SaaS: A number of organizations have adopted SaaS applications and public cloud services, along with lower-cost direct Internet connections that let branch offices reach SaaS resources without backhauling. Applications accessed over the Internet compete for bandwidth with recreational traffic at the branch office, so business applications may struggle for the resources they need. Data arriving from the corporate data center over the private WAN only exacerbates the problem.
To ensure critical applications perform predictably, it is essential to control less important traffic and make room on the network for vital data to get through. Placing devices at third-party websites to do outbound QoS is generally not an option, making the point at which traffic enters the corporate network the only practical place to control bandwidth usage.
How is controlling inbound different?
At first glance it may appear that QoS should function the same whether it is deployed on the outbound or the inbound direction. However, there are two subtle but important differences that come into play when controlling application traffic with inbound QoS:
For inbound QoS to function, the inbound traffic control solution must be the point at which traffic is queued -- essentially the bottleneck. If traffic has already been rate-limited by an upstream router before arriving on-site, the QoS solution at the receiving location is rendered ineffective. The upstream bottleneck is commonly an unmanaged first-in, first-out (FIFO) queue that takes no account of the receiving organization's business requirements and, for latency-sensitive applications such as VoIP in particular, degrades performance.
For an inbound QoS solution to be the authoritative control point for traffic entering a site, it must employ unique techniques to ensure it solely plays the role of traffic shaper. Several key mechanisms help to ensure that incoming applications are accurately controlled and given the bandwidth and priority needed to match the identified requirements of the organization.
* TCP control: Smooth out TCP microbursts. With native TCP congestion control, a flow starts off in slow-start mode and increases the send rate until the full window size is reached or congestion is detected on the link. When congestion is detected, the TCP sender switches to congestion-avoidance mode, which throttles back the rate at which data is sent. This situation is normal TCP behavior but results in a traffic queue that collects at the upstream service provider, preventing the downstream QoS solution from effectively controlling traffic. To ensure traffic queues build up at the inbound QoS enforcement solution instead of the upstream router, inbound QoS makes small scheduling adjustments at well-chosen times so that slow-starting TCP flows remain below the bottleneck rate.
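The TCP behavior described above can be sketched in a few lines. This is a minimal, illustrative simulation of classic TCP congestion control (the function name and parameters are ours, not any vendor's API): the congestion window doubles each round trip during slow start until the slow-start threshold is reached, then grows linearly in congestion avoidance, and a detected loss halves the threshold.

```python
# Minimal sketch of classic TCP congestion control dynamics.
# Names and parameters are illustrative assumptions, not a product API.

def simulate_cwnd(rounds, ssthresh, loss_rounds=()):
    """Return the congestion window (in segments) after each round trip."""
    cwnd, history = 1, []
    for r in range(rounds):
        if r in loss_rounds:               # congestion detected
            ssthresh = max(cwnd // 2, 1)   # multiplicative decrease
            cwnd = ssthresh
        elif cwnd < ssthresh:              # slow start: exponential growth
            cwnd *= 2
        else:                              # congestion avoidance: linear growth
            cwnd += 1
        history.append(cwnd)
    return history

# Exponential growth until ssthresh (8), then linear growth.
print(simulate_cwnd(6, ssthresh=8))  # [2, 4, 8, 9, 10, 11]
```

The bursty exponential phase is exactly what lets a queue build at the upstream provider; nudging these flows to stay below the bottleneck rate keeps the queue at the inbound QoS device instead.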
* Flow padding: Make room for new flows. Flow padding is a form of short-term bandwidth reservation done on a per-flow basis that takes advantage of standard TCP congestion control behavior to help shape traffic. When a new traffic flow appears, the inbound QoS engine frees up bandwidth just in time to make space for the identified traffic to get through the network without causing congestion at the upstream bottleneck. Other incoming TCP flows respond by scaling back their sending rates in proportion to the bandwidth that was allocated to the flow being shaped -- standard TCP behavior. Flow padding is executed for very short durations at a time, on the order of tens to hundreds of milliseconds -- just long enough to enforce configured policies -- while maintaining control over the bottleneck and guaranteeing that flows quickly converge to their proper sending rates.
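A rough sketch of the flow-padding idea, under our own assumptions (the function and flow names are hypothetical, not the vendor's implementation): during the short padding window, a new flow gets a reserved rate carved out of the link, and the rate offered to existing flows shrinks accordingly, which standard TCP senders respond to by backing off.

```python
# Illustrative sketch of flow padding: a short-term per-flow reservation.
# allocate_rates and the flow labels are assumptions for illustration only.

def allocate_rates(link_kbps, flows, padded_flow=None, pad_kbps=0):
    """Split link capacity across flows, carving out a temporary
    reservation for `padded_flow` during its padding window."""
    rates = {}
    if padded_flow is not None:
        rates[padded_flow] = pad_kbps          # short-term reservation
        remaining = link_kbps - pad_kbps
        others = [f for f in flows if f != padded_flow]
    else:
        remaining = link_kbps
        others = list(flows)
    share = remaining // len(others) if others else 0
    for f in others:
        rates[f] = share                       # existing flows back off
    return rates

# During the padding window (tens to hundreds of ms), a new VoIP flow
# gets 2Mbps and two bulk flows are throttled to make room on a 10Mbps link.
print(allocate_rates(10_000, ["bulk1", "bulk2", "voip"],
                     padded_flow="voip", pad_kbps=2_000))
```

Once the window expires, the reservation is released and all flows converge back to their normal shares.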
* Link padding: Be the bottleneck. Link padding reserves a small, fixed amount of the available bandwidth -- as little as 2.5% -- to ensure the inbound QoS solution is the bottleneck for the traffic it is scheduling. For example, if inbound QoS is configured on a 10Mbps WAN connection, removing 250Kbps and scheduling traffic at 9.75Mbps ensures that queues form at the QoS device rather than the upstream router, because traffic now arrives at or below the shaped rate.
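The link-padding arithmetic from the example above is straightforward; this small helper (our own illustrative function, not a product API) computes the shaped scheduling rate for a given pad fraction.

```python
# Link padding arithmetic: reserve a small fixed fraction of the link so
# the inbound QoS engine, not the upstream router, is the queueing point.
# shaped_rate_kbps is an illustrative helper, not vendor code.

def shaped_rate_kbps(link_kbps, pad_fraction=0.025):
    """Rate at which the inbound QoS engine schedules traffic."""
    return link_kbps * (1 - pad_fraction)

# Padding 2.5% of a 10Mbps link removes 250Kbps, leaving 9750Kbps.
print(shaped_rate_kbps(10_000))  # 9750.0
```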
Inbound control requires careful orchestration of the competing priorities and behaviors of application flows. Solutions that feature advanced inbound control technology ensure each application gets the right level of service across the network.
This capability, paired with intelligent traffic identification and scheduling that accounts not just for bandwidth allocation but also for an application's sensitivity to latency, allows business applications to perform predictably whether delivered over the enterprise WAN or the public Internet.
Miles Kelly is senior director of product marketing for Steelhead and Granite appliances at Riverbed Technology.