CIO

IT networking gear goes green

Servers get most of the glory when it comes to energy management, but networking gear is about to catch up.

Over the past year, network equipment vendors have begun to emphasize energy-efficiency features, something that was never a top priority before, says Dale Cosgro, a product manager in Hewlett-Packard Co.'s ProCurve network products organization.

Data Center Energy Stats

How much power does data center gear consume?

Cisco Catalyst 6500 series switch, fully populated: 2kW to 3kW per rack

Cisco Nexus 7000 series switch, fully configured: 13kW per rack

Fully loaded rack of servers, average load: 4kW per rack

Sources: Cisco; HP Critical Facilities Services

Networking infrastructure isn't in the same class as servers or storage in terms of overall power consumption -- there are far more servers than switches -- but networking can account for up to 15 per cent of a data center's total power budget.

And unlike servers, which have sophisticated power management controls, networking equipment must always be on and ready to accept traffic.

Also, networking power use at the rack level is significant. A fully populated Cisco Catalyst 6500 series switch consumes 2kW to 3kW in a 42U rack. Cisco Systems Inc.'s largest enterprise-class switches, the Nexus 7000 series, can consume as much as 13kW per rack, according to Rob Aldrich, an architect in Cisco's advanced services group. A 13kW cabinet generates more heat than many server racks -- enough that it requires careful attention to cooling.

By way of comparison, most data centers top out at between 8kW and 10kW for server racks, says Rakesh Kumar, an analyst at Gartner Inc. The average cabinet consumes about 4kW, says Peter Gross, vice president and general manager of HP Critical Facilities Services.

Vendors have already adopted some energy-related features, such as high-efficiency power supplies and variable-speed cooling fans. But with switches, there's a limit to what can be done in the area of power management today. Most idle switches still consume 40 per cent to 60 per cent of maximum operating power. Anything less than 40 per cent compromises performance, says Aldrich. "Unless users want to accept latency, you have to have the power," he adds.

But huge improvements are coming, says Cosgro.

More-Efficient Technology

Technology improvements that favor energy efficiency are gradually emerging in several areas. "As new generations of products hit the market, more of these kinds of features will be implemented," says Cosgro.

Some examples include more modular application-specific integrated circuit (ASIC) designs that allow switches to turn off components not in use, from LED panel lights to tables in memory.

Also, general advances in silicon technology will minimize current leakage and gradually boost energy efficiency with each new generation of chips. Eventually, says Cosgro, "we should be able to get networking equipment that uses 100 watts today down to 10 watts."

Improvements in other areas have also helped. Software, for example, is now more efficient, consuming fewer CPU cycles -- and less energy. And hardware is now designed to run at higher operating temperatures to reduce cooling costs.

For example, Cosgro claims that HP's current ProCurve equipment can run safely at temperatures up to 130 degrees Fahrenheit -- higher than the specifications for most other data center equipment. "That's driven by requirements of IT managers who want to run data centers at higher temperatures," he says.

It may be possible to move to higher operating temperatures in a single-vendor wiring closet, but network equipment vendors will need to do a better job of testing in mixed environments before temperatures approaching 130 degrees can be sustained -- especially within racks in the data center. "No one knows how networking and other types of equipment will react when sitting next to servers that displace more BTUs," says Drue Reeves, an analyst at Burton Group. Today, each vendor tests with only its own equipment.

More sophisticated power-monitoring systems will also help save energy, as will management tools with more granular controls. Real-time power and temperature monitoring is key to any data center and is essential for managing growth. "If something is not right, you want to know about it before a catastrophe happens," says Rockwell Bonecutter, global lead of Accenture Ltd.'s green IT practice.

Management software could be configured to identify specific network equipment, such as voice-over-IP phones, by using the Link Layer Discovery Protocol. The software could then automatically shut off Power-over-Ethernet current for VoIP handsets at a specific time of day or when the associated PC on each desktop is turned off at day's end.
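
To make that idea concrete, here is a minimal sketch of such a policy in Python. The two helpers, get_lldp_neighbors() and set_poe_enabled(), are hypothetical placeholders for whatever management interface a given switch exposes (SNMP, NETCONF or a vendor API); this is not any particular vendor's software, just an illustration of the logic.

```python
from datetime import datetime, time

# Hypothetical helpers -- in practice these would wrap the switch vendor's
# management interface (SNMP, NETCONF or a proprietary API).
def get_lldp_neighbors(switch):
    """Return a list of (port, system_description) tuples discovered via LLDP."""
    raise NotImplementedError

def set_poe_enabled(switch, port, enabled):
    """Enable or disable Power-over-Ethernet on a single switch port."""
    raise NotImplementedError

BUSINESS_HOURS = (time(7, 0), time(19, 0))  # assumed 7:00 to 19:00 local time

def is_after_hours(now=None):
    now = (now or datetime.now()).time()
    start, end = BUSINESS_HOURS
    return now < start or now >= end

def apply_poe_policy(switch):
    """Cut PoE power to ports whose LLDP neighbor looks like a VoIP phone
    outside business hours; restore it during the day."""
    enable = not is_after_hours()
    for port, sys_desc in get_lldp_neighbors(switch):
        if "phone" in sys_desc.lower():  # crude classification by LLDP system description
            set_poe_enabled(switch, port, enable)
```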

Another example: Edge switches are typically connected to two routers for redundancy during the day, but a network could be configured to have one router go into low-power sleep mode at night. The sleeping router would "wake up" only if and when it was needed.

These types of applications represent "a huge opportunity for savings," says Cosgro.

Better Standards

Emerging standards could soon help save energy during periods when networks sit unused and will help IT compare the relative efficiency of competing products.

The new IEEE 802.3az Energy Efficient Ethernet (EEE) standard, approved on Sept. 30, may offer the biggest bang for the buck by cutting power consumption for network equipment when utilization is low.

Today, Ethernet devices keep their transmitters active even when network traffic is at a standstill. Equipment supporting the EEE standard will send a brief refresh signal periodically but stay quiet the rest of the time, cutting power use by more than 90 per cent during idle periods.

In a large network, that's "a whole lot of energy" that could be saved, Cosgro says.
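
A back-of-the-envelope calculation gives a rough sense of scale. Every figure below is an assumption chosen for illustration -- a 400W switch idling at half its maximum draw, links idle two-thirds of the year, and EEE trimming 90 per cent of that idle draw -- not a measurement from the vendors quoted here.

```python
# Back-of-the-envelope estimate of EEE idle-period savings (illustrative numbers only).
MAX_POWER_W = 400.0                  # assumed maximum operating power of one switch
IDLE_FRACTION_OF_MAX = 0.5           # idle draw of 40-60% of max, per the article; use 50%
IDLE_HOURS_PER_YEAR = 8760 * 2 / 3   # assume links sit idle two-thirds of the time
EEE_IDLE_REDUCTION = 0.9             # EEE cuts idle power by more than 90%

idle_power_w = MAX_POWER_W * IDLE_FRACTION_OF_MAX
saved_kwh = idle_power_w * EEE_IDLE_REDUCTION * IDLE_HOURS_PER_YEAR / 1000.0
print(f"Estimated savings per switch: {saved_kwh:.0f} kWh/year")
# About 1,051 kWh per year with these assumptions -- multiplied across hundreds
# of switches and thousands of ports, that is the "whole lot of energy" Cosgro cites.
```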

The standard will allow "downshifting" in other modes of operation as well. In a 10Gbit switch, for example, individual ports carrying only a 1Gbit/sec. load will be able to drop from full 10Gbit/sec. power levels to the power required for a 1Gbit/sec. link, saving energy until activity picks up again.

Products built to support the EEE standard should start appearing by 2011, says Aldrich.

Another emerging technology, the PCI-SIG's Multi-Root I/O Virtualization specification, gives servers within a rack access to a shared pool of network interface cards. This happens via a high-speed PCI Express bus -- essentially extending the PCIe bus outside of the server. "Instead of a [network interface card] in every server, you'll have access to a bank of NICs in a rack, and you can assign portions of the bandwidth of one of those NICs to a server," probably using tools provided by the server vendor, says Reeves.

Energy savings will come from increased utilization of the network -- achieved by splitting up the bandwidth in each "virtual NIC" -- and the need for fewer NICs and switch ports, he says. He expects to see standards-compliant products perhaps as early as 2012.
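
To picture how the shared pool works, the following is a purely conceptual sketch -- not the MR-IOV specification itself, nor any server vendor's assignment tools -- of carving virtual-NIC bandwidth out of a small bank of shared 10Gbit adapters in a rack.

```python
# Conceptual model only: carve virtual NICs out of a shared bank of physical NICs.
from dataclasses import dataclass, field

@dataclass
class SharedNic:
    name: str
    capacity_gbps: float
    allocated_gbps: float = 0.0

    def free_gbps(self):
        return self.capacity_gbps - self.allocated_gbps

@dataclass
class NicPool:
    nics: list = field(default_factory=list)

    def assign_vnic(self, server, bandwidth_gbps):
        """Assign a slice of the first NIC with enough headroom to a server."""
        for nic in self.nics:
            if nic.free_gbps() >= bandwidth_gbps:
                nic.allocated_gbps += bandwidth_gbps
                return (server, nic.name, bandwidth_gbps)
        raise RuntimeError("pool exhausted: add a NIC or rebalance")

pool = NicPool([SharedNic("nic0", 10.0), SharedNic("nic1", 10.0)])
for server in ("srv01", "srv02", "srv03", "srv04"):
    print(pool.assign_vnic(server, 2.5))
# Four servers share one 10Gbit NIC instead of each burning a dedicated card,
# which is where the utilization and power savings come from.
```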

Energy-Efficiency Ratings

Finally, standardized measurements of energy efficiency have started to appear on some networking equipment. Juniper Networks Inc., for example, includes the Energy Consumption Rating on the data sheets for some of its products. ECR is a draft specification developed by the ECR Initiative, a consortium that includes Lawrence Livermore National Laboratory, Ixia and Juniper; it measures performance per unit of energy for networking and telecommunications equipment.

Both Cisco and Juniper are backing the Alliance for Telecommunications Industry Solutions' Telecommunications Energy Efficiency Rating specification, which ATIS introduced last year.

However, neither specification has been universally accepted. Juniper supports ECR but doesn't include the rating on all of its products' data sheets. Cosgro says HP hasn't included either of those energy-efficiency standards in its data sheets because users don't understand the metrics.

"What they care about is the number of watts used," he says.

Another strike against the specifications, Cosgro says, is that they lack a detailed, open rating methodology. That means vendors can choose rating firms that use methodologies that best suit their needs.

A truly open specification isn't likely to appear until next year at the earliest, when the Environmental Protection Agency starts work on an Energy Star rating for large networking equipment.

The agency announced an Energy Star specification for data centers in June and plans to eventually develop specifications for data center UPSs and cooling systems, according to a spokesman. The networking equipment specification, which will cover everything from power supplies and internal chips to Energy Efficient Ethernet, will be "the key energy-efficiency standard" going forward, Cosgro says.

The easiest way to increase energy efficiency is to buy new equipment, but that's not necessarily a practical option, because network administrators making purchasing decisions must consider other factors besides potential energy savings -- such as the remaining useful life of their current equipment. A 15 per cent cut in energy costs may add up when spread across thousands of servers, but the total savings would be much smaller on a few racks of switches.
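
Some rough numbers illustrate the gap. Every figure below is an assumption chosen for illustration -- 2,000 servers averaging 300W each, three racks of switches at about 2.5kW per rack, power at 10 cents per kWh -- not data from the sources quoted in this article.

```python
# Illustrative comparison only -- every figure below is an assumption.
KWH_PRICE = 0.10          # dollars per kWh
HOURS_PER_YEAR = 8760
SAVINGS_FRACTION = 0.15   # the 15 per cent cut discussed above

def annual_savings(total_kw):
    return total_kw * HOURS_PER_YEAR * SAVINGS_FRACTION * KWH_PRICE

servers_kw = 2000 * 0.3    # 2,000 servers at roughly 300W each
switches_kw = 3 * 2.5      # three racks of switches at about 2.5kW per rack

print(f"Servers:  ${annual_savings(servers_kw):,.0f} per year")
print(f"Switches: ${annual_savings(switches_kw):,.0f} per year")
# Roughly $78,840 versus $986 per year with these assumptions -- the same
# percentage cut, but a very different payback case.
```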

Even for a single rack, the cost per kilowatt usually won't justify an upgrade. "Most people will not save enough energy in the short run to justify replacing their equipment," warns Burton Group's Reeves. "Stay on your regular life cycle."

Green Tips

5 Ways to Green Your Network

1. Refresh your equipment. Cisco estimates that the energy efficiency of its products improves 15 per cent to 20 per cent every two to three years. The energy savings alone aren't enough to justify buying new equipment, but improving efficiency is one of several reasons for keeping your refreshes on schedule.

2. Make use of energy-efficient features. These features can vary by vendor -- or even by model -- so check before you buy. For example, Cisco's Nexus 7000 switch can reduce power consumption in empty line-card slots, but that feature is not yet available in the vendor's more popular Catalyst 6500 series. Other vendors, such as Hewlett-Packard, allow you to turn off empty slots, but the process is a manual one. Juniper Networks lets administrators cut power to unused ports, but only by writing a script that lowers the power once a certain activity threshold is reached (a vendor-neutral sketch of that kind of script appears after this list).

3. Virtualize. Server virtualization increases network utilization and reduces network equipment needs by allowing multiple virtual servers to share one or more network adapters within the confines of a single physical server. On the switch side, features such as Cisco's Virtual Switching System let multiple physical switches operate as a single logical switch, so more servers can share the same pool of ports. This works because most organizations overprovision switching capacity based on peak loads. Reducing the total number of physical ports required lowers overall power consumption. Similarly, HP's Virtual Connect technology abstracts HP server blades from Ethernet and Fibre Channel networks. It requires fewer network interface cards, reduces cabling requirements and increases network utilization.

4. Be careful with cabinets. Make sure networking equipment that goes into a hot aisle/cold aisle row uses front-to-back airflow, not side-to-side cooling. Vendors prefer side-to-side venting, which allows them to get more equipment into the rack, but units using a side-to-side design may blow hot air back into the cold aisle -- or directly into an adjacent rack, overheating it. If the vendor doesn't offer switching equipment that supports front-to-back airflow, you'll need to retrofit the cabinet with a conversion kit that redirects the airflow for use in a hot aisle/cold aisle configuration; such kits are available from vendors such as Panduit Corp. and Chatsworth Products Inc.

5. Use a structured network design. Your best bet for the greatest energy efficiency is to follow the Telecommunications Industry Association's TIA-942 Telecommunications Infrastructure Standard for Data Centers, says Rockwell Bonecutter, global lead of Accenture's green IT practice. The specification locates networking equipment in a main distribution area, which ultimately connects to servers, storage and other IT equipment in individual racks.
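
For tip No. 2, a vendor-neutral sketch of the kind of threshold script described there might look like the following. Here poll_port_counters() and set_port_power() are hypothetical wrappers for whatever scripting interface the switch exposes (CLI automation, SNMP or NETCONF), and the 1 kbit/s threshold is an arbitrary example, not a vendor recommendation.

```python
# Vendor-neutral sketch: power down ports whose traffic stays under a threshold.
IDLE_THRESHOLD_BPS = 1000  # assumed threshold: under 1 kbit/s counts as idle

def poll_port_counters(switch):
    """Return {port: bits_per_second} averaged over the last polling interval."""
    raise NotImplementedError  # hypothetical wrapper around the switch's interface

def set_port_power(switch, port, powered):
    """Power a single port up or down."""
    raise NotImplementedError  # hypothetical wrapper around the switch's interface

def power_down_idle_ports(switch, exclude=()):
    """Cut power to ports that have fallen below the activity threshold."""
    for port, bps in poll_port_counters(switch).items():
        if port not in exclude and bps < IDLE_THRESHOLD_BPS:
            set_port_power(switch, port, powered=False)
```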

This version of this story was originally published in Computerworld's print edition. It was adapted from an article that appeared earlier on Computerworld.com.