Seven Steps to a Green Data Centre

Green data centres don't just save energy; they also reduce the need for expensive infrastructure upgrades to deal with rising power and cooling demands.

How green is your data centre? If you don't care now, you will soon. Most data centre managers haven't noticed the steady rise in electricity costs, since they don't usually see those bills. But they do see the symptoms of surging power demands.

High-density servers are creating hot spots in data centres, with power densities surpassing 30 kilowatts per rack for some high-end systems. As a result, some data centre managers are finding that they can't distribute enough power to those racks on the floor. Others are finding that they can't get more power to the building at all: they've maxed out the utility's ability to deliver additional capacity to that location.

The problem already has Mallory Forbes' attention. "Every year, as we revise our standards, the power requirements seem to go up," says Forbes, senior vice president and manager of mainframe technology at US-based Regions Financial. "It creates a big challenge in managing the data centre because you continually have to add power."

Energy efficiency savings can add up. A watt saved in data centre power consumption saves at least a watt in cooling. IT managers who take the long view are already paying attention to the return on investment associated with acquiring more energy-efficient equipment. "Energy becomes important in making a business case that goes out five years," says Robert Yale, principal of technical operations at US-based The Vanguard Group. His 5600-square-metre data centre caters mostly to Web-based transactions. While security and availability come first, he says Vanguard is "focusing more on the energy issue than we have in the past".

Beyond the energy savings, green data centres reduce the need for expensive infrastructure upgrades to deal with increased power and cooling demands. Some organizations are also starting to take the next step and are looking at the entire data centre from an environmental perspective.

Following these steps will keep astute data centre managers ahead of the game.

Consolidate your servers, and consolidate some more

Existing data centres can achieve substantial savings by making just a few basic changes, and consolidating servers is a good place to start, says Ken Brill, founder and executive director of US-based consultancy The Uptime Institute, which has studied the issue for several years. In many data centres, Brill says, "between 10 percent and 30 percent of servers are dead and could be turned off".

Cost savings from removing physical servers can add up quickly - up to $US1200 in energy costs per server per year, according to one estimate. "For a server, you'll save $US300 to $US600 each year in direct energy costs. You'll save another $US300 to $US600 a year in cooling costs," says Mark Bramfitt, senior program manager in customer energy management at Pacific Gas & Electric (PG&E). The US-based utility offers a "virtualization incentive" program that pays $US150 to $US300 per server removed from service as a result of a server consolidation project.
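Those per-server figures make it easy to ballpark the payoff of a consolidation project. A minimal sketch, using the estimates quoted above and an assumed, purely hypothetical fleet size:

```python
# Rough payback estimate for a server consolidation project, using the per-server
# figures quoted above. The fleet size and the midpoints chosen are assumptions.

servers_removed = 100               # hypothetical: servers retired or virtualized away
energy_savings_per_server = 450     # $US/year, midpoint of the $300-$600 estimate
cooling_savings_per_server = 450    # $US/year, midpoint of the $300-$600 estimate
utility_incentive_per_server = 225  # $US one-off, midpoint of PG&E's $150-$300 rebate

annual_savings = servers_removed * (energy_savings_per_server + cooling_savings_per_server)
one_off_incentive = servers_removed * utility_incentive_per_server

print(f"Annual energy + cooling savings: ${annual_savings:,}")    # $90,000
print(f"One-off utility incentive:       ${one_off_incentive:,}")  # $22,500
```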

Once idle servers have been removed, data centre managers should consider moving as many server-based applications as feasible into virtual machines. That allows IT to substantially reduce the number of physical servers required while increasing the utilization levels of remaining servers.

Most physical servers today run at about 10 percent to 15 percent utilization. Since an idle server can draw as much as 30 percent of the energy it uses at peak utilization, you get more bang for your energy dollar by increasing utilization levels, says VMware's Balkansky.

To that end, VMware is working on a new feature associated with its Distributed Resource Scheduler that will dynamically allocate workloads between physical servers that are treated as a single resource pool. Distributed Power Management will "squeeze virtual machines on as few physical machines as possible", Balkansky says, and then automatically power down servers that are not being used. The system makes adjustments dynamically as workloads change. In this way, workloads might be consolidated in the evening during off-hours, and then reallocated across more physical machines in the morning, as activity increases.
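The packing idea behind such a feature can be sketched in a few lines. The following is only an illustration of the general approach - a simple first-fit heuristic - and not VMware's actual Distributed Power Management algorithm; the loads, host capacity threshold and fleet size are assumptions.

```python
# Illustrative only: first-fit-decreasing packing of VM loads onto hosts, powering
# off the hosts left empty. Not VMware's actual Distributed Power Management logic.

def consolidate(vm_loads, host_capacity, n_hosts):
    """Assign VM loads (fractions of one host's capacity) to as few hosts as possible."""
    hosts = [0.0] * n_hosts                      # current load on each host
    for load in sorted(vm_loads, reverse=True):  # place the largest workloads first
        for i, used in enumerate(hosts):
            if used + load <= host_capacity:
                hosts[i] = used + load
                break
        else:
            raise RuntimeError("workload does not fit on the available hosts")
    active = [h for h in hosts if h > 0]
    return active, n_hosts - len(active)         # loaded hosts, hosts that can power down

# Evening scenario: 20 light workloads that averaged 12% utilization on dedicated servers
active, idle = consolidate([0.12] * 20, host_capacity=0.80, n_hosts=20)
print(f"{len(active)} hosts stay up, {idle} can be powered down")  # 4 stay up, 16 power down
```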

Turn on power management

Although power management tools are available, administrators today don't always make use of them. "In a typical data centre, the electricity usage hardly varies at all, but the IT load varies by a factor of three or more. That tells you that we're not properly implementing power management," says Amory Lovins, chairman and chief scientist at the Rocky Mountain Institute, a US-based energy and sustainability research firm.

Just taking full advantage of power management features and turning off unused servers can cut data centre energy requirements by about 20 percent, he adds.

That's not happening in many data centres today because administrators focus almost exclusively on uptime and performance, and IT staffers aren't comfortable yet with available power management tools, says Christian Belady, distinguished technologist at Hewlett-Packard. He argues that turning on power management can actually increase reliability and uptime by reducing stresses on data centre power and cooling systems.

Vendors could also do more to facilitate the use of power management capabilities, says Brent Kerby, Opteron product manager on Advanced Micro Devices' server team. While AMD and other chip makers are implementing new power management features, "in Microsoft Windows, support is inherent, but you have to adjust the power scheme to take advantage of it", he says. Kerby says that should be turned on by default. "Power management technology is not leveraged as much as it should be," he adds.
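Whether power management is actually in use is easy to check. The sketch below assumes a Linux host exposing the standard cpufreq sysfs interface; on Windows, the equivalent step is adjusting the power scheme Kerby mentions.

```python
# Quick audit of whether CPU frequency scaling is enabled on a Linux host, via the
# standard cpufreq sysfs interface. Assumes a cpufreq-capable kernel; hosts without
# /sys/devices/system/cpu/*/cpufreq simply produce no output.

import glob

for path in sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor")):
    cpu = path.split("/")[5]          # e.g. "cpu0"
    with open(path) as f:
        governor = f.read().strip()   # "performance" means frequency scaling is effectively off
    print(f"{cpu}: {governor}")
```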


The potential savings of leveraging power management with the latest processors are significant. AMD's newest designs will scale back voltage and clock frequency on a per-core basis and will reduce the power to memory, another rapidly rising power hog. "At 50 percent CPU utilization, you'll see a 65 percent savings in power. Even at 80 percent utilization, you'll see a 25 percent savings in power," just by turning on power management, says Kerby. Other chip makers are working on similar technologies.
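The physics behind those numbers: dynamic CMOS power scales roughly with the square of supply voltage times clock frequency, so even modest voltage and frequency reductions compound. The figures below are illustrative, not AMD's published measurements:

```python
# First-order CMOS model: dynamic power scales roughly with V^2 * f.
# The voltage and frequency steps are illustrative assumptions; real savings also
# depend on leakage, memory power and how long cores actually stay stepped down.

def relative_dynamic_power(v_scale, f_scale):
    return (v_scale ** 2) * f_scale

# Step a lightly loaded core down to 80% of nominal voltage and 60% of nominal frequency
p = relative_dynamic_power(0.80, 0.60)
print(f"Dynamic power falls to {p:.0%} of nominal, a {1 - p:.0%} saving")  # ~62% saving
```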

In some cases, power management may cause more problems than it cures, says Jason Williams, chief technology officer at DigiTar, a US-based messaging logistics service provider. He runs Linux on Sun T2000 servers with UltraSPARC multicore processors. "We use a lot of Linux, and [power management] can cause some very screwy behaviours in the operating system," he says. "We've seen random kernel crashes primarily. Some systems seem to run Linux fine with ACPI turned on, and others don't. It's really hard to predict, so we generally turn it and any other power management off."

ACPI is Advanced Configuration and Power Interface, a specification co-developed by HP, Intel, Microsoft and other industry players.

Upgrade to energy-efficient servers

The first generation of multicore chip designs showed a marked decrease in overall power consumption. "Intel's Xeon 5100 delivered twice the performance with 40 percent less power," says Lori Wigle, director of server technology and initiatives marketing at Intel. Moving to servers based on these designs should increase energy efficiency.

Future gains, however, are likely to be more limited. Sun Microsystems, Intel and AMD all say they expect their servers' power consumption to remain flat in the near term. AMD's current processor offerings range from 89W to 120W. "That's where we're holding," says AMD's Kerby. For her part, Wigle also doesn't expect Intel's next-generation products to repeat the efficiency gains of the 5100. "We'll be seeing something slightly more modest in the transition to 45-nanometer products," she says.

Chip makers are also consolidating functions such as I/O and memory controllers onto the processor platform. Sun's Niagara II includes a Peripheral Component Interconnect Express bridge, 10 Gigabit Ethernet and floating-point functions on a single chip. "We've created a true server on a chip," says Rick Hetherington, chief architect and distinguished engineer at Sun.

But that consolidation doesn't necessarily mean lower overall server power consumption at the chip level, says an engineer at IBM's System x platform group who asked not to be identified. Overall, he says, net power consumption will not change. "The gains from integration ... are offset by the newer, faster interconnects, such as PCIe Gen2, CSI or HT3, FBDIMM or DDR3," he says.

Go with high-efficiency power supplies

Power supplies are a prime example of the server market's lack of focus on total cost of ownership: the inefficient units that ship with many servers today waste more energy than any other component in the data centre, says Jonathan Koomey, a consulting professor at Stanford University and staff scientist at Lawrence Berkeley National Laboratory. He led an industry effort to develop a server energy measurement protocol.

Progress in improving designs has been slow. "Power-supply efficiencies have increased at about one half percent a year," says Intel's Wigle. Newer designs are much more efficient, but in the volume server market, they're not universally implemented because they're more expensive.

With the less-efficient power supplies found in many commodity servers, efficiency peaks at 70 percent to 75 percent at 100 percent utilization but drops into the 65 percent range at 20 percent utilization - and the average server load is in the 10 percent to 15 percent range. That means inefficient power supplies can waste nearly half of the power before it even reaches the IT equipment. The problem is compounded by the fact that every watt of energy wasted by the power supply requires another watt of cooling system power just to remove the resulting waste heat from the data centre.
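That compounding is easy to quantify. A rough sketch, using the efficiency figures above and the one-watt-of-cooling-per-watt-of-heat rule of thumb:

```python
# Conversion loss in the power supply, plus the cooling power needed to remove that
# loss, using the article's roughly one-watt-of-cooling-per-watt-of-heat assumption.

def overhead_watts(it_load_w, supply_efficiency, cooling_watts_per_watt=1.0):
    draw = it_load_w / supply_efficiency    # power pulled from the wall
    conversion_loss = draw - it_load_w      # turned straight into heat
    cooling = conversion_loss * cooling_watts_per_watt
    return conversion_loss, cooling

loss, cooling = overhead_watts(100, 0.65)   # a commodity supply at light load
print(f"{loss:.0f} W lost in conversion, {cooling:.0f} W more to cool it away")  # 54 W + 54 W

loss, cooling = overhead_watts(100, 0.85)   # a high-efficiency supply
print(f"{loss:.0f} W lost in conversion, {cooling:.0f} W more to cool it away")  # 18 W + 18 W
```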

Power supplies are available today that attain 80 percent or higher efficiency - even at 20 percent load - but they cost significantly more. High-efficiency power supplies carry a 15 percent to 20 percent premium, says Lakshmi Mandyam, director of marketing at US-based power supply vendor ColdWatt.

Still, moving to these more energy-efficient power supplies reduces both operating costs and capital costs. "If they spent $US20 on [an energy-efficient] power supply, you would save $US100 on the capital cost of cooling and infrastructure equipment," says RMI's Lovins. Any power supply that doesn't deliver 80 percent efficiency across a range of low load levels should be considered unacceptable, he says.

To make matters worse, Sun's Hetherington says, server manufacturers have traditionally overspecified power needs, opting for a 600W power supply for a server that really should only need 300W. "If you're designing a server, you don't want to be close to threatening peak [power] levels. So you find your comfort level above that to specify the supply," he says. "At that level, it may only be consuming 300W, but you have a 650W power supply taxed at half output, and it's at its most inefficient operating point. The loss of conversion is huge. That's one of the biggest sinners in terms of energy waste," he says.

All of the major server vendors say they already offer or are phasing in more efficient power supplies in their server offerings.

HP is in the process of standardizing on a single power supply design for its servers. Speaking at a recent Uptime Institute conference, Paul Perez, vice president of storage, network and infrastructure, said: "Power supplies will ship this summer with much higher efficiency." He added that HP is trying to increase efficiency percentages into the "mid-90s". HP's Belady says all of his employer's servers use power supplies that are at least 85 percent efficient.

Smart power management can also increase power supply utilization levels. For example, HP's PowerSaver technology turns off some of the six power supplies in a C-class blade server enclosure when total load drops; this saves energy and increases efficiency.
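The underlying principle can be sketched simply: keep only as many supplies active as the load requires, plus one for redundancy, so each active unit runs in an efficient load band. This is an illustration of the idea, not HP's actual PowerSaver logic; the ratings and target load fraction are assumptions.

```python
# Illustrative only, not HP's PowerSaver implementation: keep just enough supplies
# active (plus one for redundancy) so each runs in an efficient load band rather
# than idling at a few percent of its rating.

import math

def supplies_to_keep_active(total_load_w, supply_rating_w, target_load_fraction=0.6,
                            redundant=1, installed=6):
    needed = math.ceil(total_load_w / (supply_rating_w * target_load_fraction))
    return min(installed, max(1, needed) + redundant)

active = supplies_to_keep_active(total_load_w=2400, supply_rating_w=2000)
print(f"Keep {active} of 6 supplies active, power down the rest")  # 3 of 6
```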

One resource IT can use when evaluating power-supply efficiency is the certification results published at 80Plus.org. The certification program, initiated by electric utilities, lists power supplies that attain at least 80 percent efficiency at 20 percent, 50 percent and 100 percent of rated load.

Stanford University's Koomey says that Google took an innovative approach to improving power-supply efficiency in its server farms. Part of the expense of power-supply designs lies in the fact that you need multiple outputs at different DC voltages. "In doing their custom motherboards ... they went to the power supply people and said: 'We don't need all of those DC outputs. We just need 12 volts.'" By specifying a single, 12-volt output, Google saved money in the design that then went toward delivering a higher efficiency power supply. "That is the kind of thinking that's needed," he says.


Break down internal business barriers

While IT has carefully tracked performance and uptime, most IT organizations aren't held accountable for energy efficiency because the IT function is separated from the facilities group: the former generates the load, while the latter usually gets the power bill, says Uptime Institute's Brill.

Breaking down those barriers is critical to understanding the challenge - and providing a financial incentive for change. Better communication among groups is also essential as cooling moves from simple room-level air conditioning to targeted cooling systems that move heat exchangers up to - or even inside - the server rack.

The line between facilities and IT responsibilities in the data centre is blurring. "The solutions won't happen without coordination by people who hardly talk to each other because they're in different offices or different tribes," says Rocky Mountain's Lovins.

The stovepiping problem has also afflicted IT equipment vendors, says Lovins. Engineers are now specialized, often designing components in a vacuum without looking at the overall system - in this case, the data centre - into which their components fit. What used to be a holistic design process that optimized an entire system for multiple benefits got "sliced into pieces", he says, with one specialist "designing one component or optimizing a component for single benefits".

Follow the standards

Several initiatives are under way that may help users identify and buy the most energy-efficient IT equipment. These include the 80 Plus program for power supplies, as well as a planned Energy Star certification program for servers. Under a congressional mandate, the US Environmental Protection Agency is working with Lawrence Berkeley National Laboratory to study ways to promote the use of energy-efficient servers. A specification could be in place later this year.

The Standard Performance Evaluation Corp. (SPEC) is also working on a performance-per-watt benchmark for servers that should help provide a baseline for energy efficiency comparisons. The specification is slated for release this year. When completed, the standard will be useful for making comparisons across platforms, says Klaus-Dieter Lange, chair of the SPEC Power and Performance Committee. The group working on the standard meets weekly, he says, and the benchmark will measure energy efficiency at different load levels.
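Since the benchmark had not been published at the time of writing, the sketch below does not reproduce SPEC's formula. It only illustrates the general idea of a load-weighted performance-per-watt score, using entirely made-up measurements:

```python
# Illustration of a load-weighted performance-per-watt score. The load levels,
# throughput and power numbers below are hypothetical, and this is not SPEC's
# published methodology.

measurements = [
    # (target load, throughput in ops/sec, average power in watts)
    (1.0, 100_000, 300),
    (0.5,  50_000, 220),
    (0.1,  10_000, 170),
    (0.0,       0, 150),   # active idle still draws substantial power
]

total_ops = sum(ops for _, ops, _ in measurements)
total_watts = sum(watts for _, _, watts in measurements)
print(f"Overall score: {total_ops / total_watts:.1f} ops/sec per watt")
```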

Advocate for change

IT equipment manufacturers won't design for energy efficiency unless users demand it. Joseph Hedgecock, senior vice president and head of platform and data centres at Lehman Brothers Holdings, says his company is lobbying vendors for more efficient server designs. "We're trying to push for more efficient power supplies and ultimately ... systems themselves," he says.

The Vanguard Group's Yale says his company is involved with The Green Grid and other industry organizations to push for greater energy efficiency. "We're becoming members of that, and I'm involved in different informational organizations," he says. "This is a topic that, industry-wide, we're discussing, and [we're] trying to work with manufacturers."