Seven Steps to a Green Data Centre

Green data centres don't just save energy; they also reduce the need for expensive infrastructure upgrades to cope with rising power and cooling demands.

The potential savings from pairing power management with the latest processors are significant. AMD's newest designs scale back voltage and clock frequency on a per-core basis and also reduce power to memory, another rapidly growing power hog. "At 50 percent CPU utilization, you'll see a 65 percent savings in power. Even at 80 percent utilization, you'll see a 25 percent savings in power," just by turning on power management, says AMD's Kerby. Other chip makers are working on similar technologies.
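To put the quoted percentages in concrete terms, here is a rough sketch of what they imply for a rack of servers. The savings figures come from the article; the 300W per-server draw and 40-server rack density are assumptions chosen purely for illustration.

```python
# Rough sketch of the savings Kerby describes, scaled to a hypothetical
# rack. Only the savings percentages come from the article; the nominal
# per-server draw and rack density below are illustrative assumptions.

NOMINAL_WATTS = 300      # assumed per-server draw with power management off
SERVERS_PER_RACK = 40    # assumed rack density

# Quoted savings from enabling power management at given CPU utilization
SAVINGS_AT_UTILIZATION = {0.50: 0.65, 0.80: 0.25}

def rack_watts_saved(utilization: float) -> float:
    """Watts saved per rack at one of the quoted utilization points."""
    saving = SAVINGS_AT_UTILIZATION[utilization]
    return NOMINAL_WATTS * saving * SERVERS_PER_RACK

print(rack_watts_saved(0.50))  # 300 * 0.65 * 40 = 7800.0 W per rack
print(rack_watts_saved(0.80))  # 300 * 0.25 * 40 = 3000.0 W per rack
```

Even at the more conservative 80 percent utilization figure, that is kilowatts of draw per rack recovered by flipping a firmware setting.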

In some cases, power management may cause more problems than it cures, says Jason Williams, chief technology officer at DigiTar, a US-based messaging logistics service provider. He runs Linux on Sun T2000 servers with UltraSparc multicore processors. "We use a lot of Linux, and [power management] can cause some very screwy behaviours in the operating system," he says. "We've seen random kernel crashes primarily. Some systems seem to run Linux fine with ACPI turned on, and others don't. It's really hard to predict, so we generally turn it and any other power management off."

ACPI is the Advanced Configuration and Power Interface, a power-management specification co-developed by HP, Intel, Microsoft and other industry players.
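Administrators weighing Williams' advice can at least check what power management is doing before deciding to disable it. On Linux kernels with cpufreq enabled, the active frequency-scaling governor is exposed under sysfs; a minimal sketch of reading it (the standard sysfs path is shown, but systems without cpufreq simply won't have the file):

```python
# Minimal sketch: read the active Linux cpufreq governor, one visible
# piece of the power management the article discusses. The sysfs path
# is the standard cpufreq location; on systems without cpufreq support
# the file is absent and we report that instead of crashing.
from pathlib import Path

GOVERNOR_PATH = Path("/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor")

def scaling_governor(path: Path = GOVERNOR_PATH) -> str:
    """Return the active cpufreq governor, or 'unavailable' if absent."""
    try:
        return path.read_text().strip()
    except OSError:
        return "unavailable"

# 'performance' pins the top frequency (scaling effectively off);
# 'ondemand', 'powersave' or 'schedutil' scale frequency with load.
print(scaling_governor())
```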

Upgrade to energy-efficient servers

The first generation of multicore chip designs showed a marked decrease in overall power consumption. "Intel's Xeon 5100 delivered twice the performance with 40 percent less power," says Lori Wigle, director of server technology and initiatives marketing at Intel. Moving to servers based on these designs should increase energy efficiency.

Future gains, however, are likely to be more limited. Sun Microsystems, Intel and AMD all say they expect their servers' power consumption to remain flat in the near term. AMD's current processor offerings range from 89W to 120W. "That's where we're holding," says AMD's Kerby. For her part, Wigle also doesn't expect Intel's next-generation products to repeat the efficiency gains of the 5100. "We'll be seeing something slightly more modest in the transition to 45-nanometer products," she says.

Chip makers are also consolidating functions such as I/O and memory controllers onto the processor platform. Sun's Niagara II includes a Peripheral Component Interconnect Express bridge, 10 Gigabit Ethernet and floating-point functions on a single chip. "We've created a true server on a chip," says Rick Hetherington, chief architect and distinguished engineer at Sun.

But that consolidation doesn't necessarily mean lower overall server power consumption at the chip level, says an engineer at IBM's System x platform group who asked not to be identified. Overall, he says, net power consumption will not change. "The gains from integration ... are offset by the newer, faster interconnects, such as PCIe Gen2, CSI or HT3, FBDIMM or DDR3," he says.

Go with high-efficiency power supplies

Power supplies are a prime example of the server market's lack of focus on total cost of ownership: the inefficient units that ship with many servers today waste more energy than any other component in the data centre, says John Koomey, a consulting professor at Stanford University and staff scientist at Lawrence Berkeley National Laboratory. He led an industry effort to develop a server energy management protocol.

Progress in improving designs has been slow. "Power-supply efficiencies have increased at about one half percent a year," says Intel's Wigle. Newer designs are much more efficient, but in the volume server market, they're not universally implemented because they're more expensive.

With the less-efficient power supplies found in many commodity servers, efficiency peaks at 70 percent to 75 percent at 100 percent utilization but drops into the 65 percent range at 20 percent utilization - and the average server load is in the 10 percent to 15 percent range. That means an inefficient power supply can waste nearly half of the power before it even reaches the IT equipment. The problem is compounded by the fact that every watt wasted by the power supply requires roughly another watt of cooling system power to remove the resulting heat from the data centre.
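The "nearly half" claim follows directly from the article's own numbers once the cooling penalty is included. A quick check, using only the figures above (65 percent conversion efficiency at typical load, and roughly one watt of cooling per watt of waste heat):

```python
# Working through the article's arithmetic: at typical 10-15 percent
# server load a commodity supply runs near 65 percent efficiency, and
# each wasted watt costs roughly another watt of cooling. All figures
# are the article's; the function just combines them.

def wasted_fraction(supply_efficiency: float) -> float:
    """Fraction of total draw (server at the wall + matching cooling for
    the supply's waste heat) that never does useful IT work."""
    wall_watts = 1.0                           # normalise to 1W at the wall
    it_watts = wall_watts * supply_efficiency  # what reaches the components
    supply_waste = wall_watts - it_watts       # lost in conversion
    cooling_watts = supply_waste               # ~1W of cooling per wasted W
    total = wall_watts + cooling_watts
    return (supply_waste + cooling_watts) / total

print(round(wasted_fraction(0.65), 2))  # 0.52 - "nearly half"
```

At 65 percent efficiency the conversion loss plus its cooling overhead come to about 52 percent of the combined draw, which is where "nearly half" comes from.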

Power supplies are available today that attain 80 percent or higher efficiency - even at 20 percent load - but they cost significantly more. High-efficiency power supplies carry a 15 percent to 20 percent premium, says Lakshmi Mandyam, director of marketing at US-based power supply vendor ColdWatt.

Still, moving to these more energy-efficient power supplies reduces both operating costs and capital costs. "If they spent $US20 on [an energy-efficient] power supply, you would save $US100 on the capital cost of cooling and infrastructure equipment," says RMI's Lovins. Any power supply that doesn't deliver 80 percent efficiency across a range of low load levels should be considered unacceptable, he says.
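The operating-cost side of that trade-off is easy to estimate. The sketch below compares two supplies delivering the same IT load at efficiencies in the ranges the article quotes; the 200W load, 24x7 duty cycle and US$0.10/kWh tariff are illustrative assumptions, and cooling savings would come on top.

```python
# Back-of-the-envelope comparison of two supplies delivering the same
# IT load. The efficiencies are in the ranges the article quotes; the
# load, duty cycle and tariff below are illustrative assumptions.

HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.10   # assumed tariff, US dollars

def annual_cost(it_watts: float, efficiency: float) -> float:
    """Yearly electricity cost to deliver it_watts through a supply."""
    wall_watts = it_watts / efficiency
    return wall_watts / 1000 * HOURS_PER_YEAR * PRICE_PER_KWH

commodity = annual_cost(200, 0.65)  # low-load efficiency of a cheap unit
premium = annual_cost(200, 0.85)    # high-efficiency unit

print(round(commodity - premium, 2))  # yearly saving per server, pre-cooling
```

Roughly US$60 a year per server at these assumptions, before counting the avoided cooling power, multiplied across every server in the room.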

To make matters worse, Sun's Hetherington says, server manufacturers have traditionally overspecified power needs, opting for a 600W power supply for a server that really should only need 300W. "If you're designing a server, you don't want to be close to threatening peak [power] levels. So you find your comfort level above that to specify the supply," he says. "At that level, it may only be consuming 300W, but you have a 650W power supply taxed at half output, and it's at its most inefficient operating point. The loss of conversion is huge. That's one of the biggest sinners in terms of energy waste," he says.

All of the major server vendors say they already offer or are phasing in more efficient power supplies in their server offerings.

HP is in the process of standardizing on a single power supply design for its servers. Paul Perez, vice president of storage, network and infrastructure, spoke at a recent Uptime Institute conference. "Power supplies will ship this summer with much higher efficiency," he said, adding that HP is trying to increase efficiency percentages into the "mid-90s". HP's Belady says all of his employer's servers use power supplies that are at least 85 percent efficient.

Smart power management can also raise power supply utilization levels. HP's PowerSaver technology, for example, turns off some of the six power supplies in a C-class blade server enclosure when total load drops; the remaining supplies then run at higher load, closer to their most efficient operating range.

One resource IT can consult when evaluating power-supply efficiency is 80Plus.org. The certification program, initiated by electric utilities, lists power supplies that attain at least 80 percent efficiency at 20 percent, 50 percent and 100 percent loads.

Stanford University's Koomey says that Google took an innovative approach to improving power-supply efficiency in its server farms. Part of the expense of power-supply designs lies in the fact that you need multiple outputs at different DC voltages. "In doing their custom motherboards ... they went to the power supply people and said: 'We don't need all of those DC outputs. We just need 12 volts.'" By specifying a single, 12-volt output, Google saved money in the design that then went toward delivering a higher efficiency power supply. "That is the kind of thinking that's needed," he says.
