Power Pinch in the Data Centre

Rising power and cooling costs are catching some data centre managers by surprise. Here's why.

For John Rowell, chief technology officer (CTO) at OpSource, keeping a lid on data centre power costs is a make-or-break proposition. The US-based company hosts software-as-a-service offerings, and as it expanded operations to meet customer demand between 2005 and 2006, its electricity costs spun out of control. "We had a 2.75 multiple in power costs over a nine-month period," he says. OpSource's business model doesn't allow it to pass those costs on to customers. "I had to eat it," Rowell says. Now he's more mindful of energy use.

Data centre energy demands, once a line-item footnote, are becoming a bigger concern as power and cooling loads continue to rise, according to US Computerworld's latest quarterly Vital Signs survey. Of 194 IT professionals surveyed in the US in February and March, 82 percent said they consider energy efficiency a factor when selecting IT equipment, and 20 percent of those at large companies said it's a big consideration.

Servers are central to the problem, accounting for 60 percent to 80 percent of the power used in data centres, according to Jonathan Koomey, staff scientist at US-based Lawrence Berkeley National Laboratory. A study he recently conducted showed that server electricity use in US data centres doubled from 2000 to 2005.

"Data centres in the US are now consuming as much energy per square foot as the industrial sector," says Paul Perez, vice president of storage, network and infrastructure at Hewlett-Packard. That trend caught the attention of Congress, which last year directed the US Environmental Protection Agency to study ways to promote the use of energy-efficient servers in data centres. The EPA's work with Lawrence Berkeley is expected to lead to an Energy Star rating for servers.

Another study, by Christian Belady, distinguished technologist at HP, demonstrates that the per-server life-cycle cost of data centre infrastructure already exceeds the per-server acquisition cost. Electricity costs will surpass initial hardware costs next year - and that doesn't include the expense of cooling, which typically doubles the total power requirement. Rising operating costs also lead to higher capital expenses, because infrastructure - from cooling systems to power distribution and power supply systems - must scale to meet demand.
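
To see how electricity can overtake hardware on a per-server basis, consider a minimal back-of-the-envelope sketch in Python. The price, power draw, tariff and service life below are illustrative assumptions, not figures from Belady's study; the factor of two for cooling follows the rule of thumb quoted above.

    # Back-of-the-envelope: lifetime electricity cost vs purchase price for one server.
    # All inputs are illustrative assumptions, not figures from the HP study.
    SERVER_PRICE_USD = 3000.0   # acquisition cost of a volume server (assumed)
    SERVER_DRAW_W = 400.0       # average draw at the plug (assumed)
    COOLING_FACTOR = 2.0        # cooling roughly doubles total power (per the article)
    TARIFF_USD_PER_KWH = 0.10   # electricity price (assumed)
    LIFETIME_YEARS = 4          # service life (assumed)

    HOURS_PER_YEAR = 24 * 365
    total_kw = SERVER_DRAW_W * COOLING_FACTOR / 1000.0
    electricity_cost = total_kw * HOURS_PER_YEAR * LIFETIME_YEARS * TARIFF_USD_PER_KWH
    print(f"Lifetime electricity (incl. cooling): ${electricity_cost:,.0f}")
    print(f"Acquisition cost: ${SERVER_PRICE_USD:,.0f}")

Under these assumptions the four-year electricity bill, roughly $2,800, already rivals the purchase price; a higher tariff or a denser server tips the balance.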

And power density is expected to continue its upward spiral. Industry projections show per-rack power densities hitting 45 kilowatts by 2014 (current designs top out at around 30 kilowatts), and US-based research firm IDC predicts that power costs will grow at four times the rate of spending on new servers through 2010.

OpSource reacted to its power problem by renegotiating its contracts with the service providers that house its servers, basing the agreements on power requirements first and space and cooling second. That's smart because 75 percent to 80 percent of infrastructure costs are now related to watts, not area, says Amory Lovins, chairman and chief scientist at the Rocky Mountain Institute, a nonprofit energy-efficiency consulting firm in the US.

Rowell now factors in energy costs when he buys new equipment. "We deploy servers based on watts per CPU. If we spend 10 percent more upfront for a potential energy life savings of 30 percent to 40 percent, that's very interesting to us," he says.
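
The trade-off Rowell describes is simple arithmetic. The Python sketch below uses assumed baseline figures to show why a 10 percent hardware premium that cuts lifetime energy cost by 30 percent to 40 percent pays for itself.

    # Rowell's trade-off: ~10% more upfront for 30-40% lower lifetime energy cost.
    # The baseline figures are assumptions for illustration only.
    base_price = 3000.0        # baseline server price, USD (assumed)
    base_energy_cost = 2800.0  # baseline lifetime energy cost, USD (assumed)

    premium = 0.10 * base_price
    for savings in (0.30, 0.40):
        saved = savings * base_energy_cost
        print(f"{savings:.0%} energy savings: ${saved:,.0f} saved "
              f"against a ${premium:,.0f} premium, net ${saved - premium:,.0f}")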

"On the equipment side, the low-hanging fruit [is] the power supply," says Koomey. The inefficient power supplies used in many volume servers can waste more than one-third of the electricity before it ever reaches the IT equipment, because conversion efficiency falls off at low loads. Server utilization rates of around 15 percent and the widespread use of redundant power supplies keep loads - and therefore efficiency - low.

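The waste Koomey describes compounds itself: a lightly utilized server sits on the inefficient end of its supply's curve, and splitting the load across a redundant pair pushes each supply further down that curve. The Python sketch below models the effect with an assumed efficiency curve for a typical non-premium supply; none of the numbers are measured vendor data.

    # Illustrative model of power-supply waste at low load.
    # The efficiency curve and wattages are assumptions, not measured data.
    def psu_efficiency(load_fraction):
        """Assumed curve: poor at light load, about 72% near full load."""
        curve = [(0.1, 0.55), (0.2, 0.62), (0.5, 0.70), (1.0, 0.72)]
        for frac, eff in curve:
            if load_fraction <= frac:
                return eff
        return curve[-1][1]

    IT_LOAD_W = 150.0   # draw of a ~15%-utilized server (assumed)
    RATED_W = 500.0     # rated capacity per supply (assumed)
    for supplies in (1, 2):       # 2 = redundant pair sharing the load
        eff = psu_efficiency((IT_LOAD_W / supplies) / RATED_W)
        wall_w = IT_LOAD_W / eff  # total draw from the wall
        print(f"{supplies} supply(ies): {eff:.0%} efficient, "
              f"{wall_w - IT_LOAD_W:.0f} W lost as heat")

With the redundant pair in this model, roughly 38 percent of the wall power never reaches the IT equipment, in line with the "more than one-third" figure above.
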
High-efficiency designs cost 15 percent to 20 percent more, but they extend efficiency well beyond 80 percent, even at low utilization levels, according to Al Rozman, vice president of engineering at ColdWatt, a US-based power supply vendor. Major server vendors all claim to be shipping or planning to ship high-efficiency power supplies in their volume server lines, and they expect to push efficiency above 90 percent. Belady says he expects power conversion efficiencies to improve by 30 percent during the next two to three years.
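
To put those conversion figures in context, a short sketch with an assumed 400-watt IT load shows how much less wall power a high-efficiency supply needs for the same work.

    # Wall power needed to deliver the same IT load at two conversion efficiencies.
    # The 400 W IT load is an assumed figure for illustration.
    IT_LOAD_W = 400.0
    for eff in (0.72, 0.90):
        wall_w = IT_LOAD_W / eff
        print(f"{eff:.0%} supply: {wall_w:.0f} W from the wall "
              f"({wall_w - IT_LOAD_W:.0f} W lost in conversion)")

Moving from 72 percent to 90 percent efficiency cuts the wall draw by about 20 percent, before counting the cooling no longer needed for the waste heat.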

It makes good business sense to improve the energy efficiency of data centres, even if doing so means paying more for equipment upfront, says Rowell. "Because we have to run our infrastructure as a profit centre, we are very focused on [getting] the most efficiency we can out of the infrastructure we have in place," he says. "Traditional IT cost centres should be doing the same. It's irresponsible not to."

Identifying the Problem

Unfortunately, many data centre operators still don't see the problem coming. Forty-one percent of Vital Signs survey respondents said they still don't know how much energy their data centres use because they don't pay for it.

Philip Borneman, assistant director of IT for the US city of Charlotte, says he didn't know what his energy costs were until he moved to a new data centre, where power was suddenly metered separately and billed to the IT budget. "That was the rude awakening," he says. Now the city has a strong incentive to keep costs under control.

"Most people are caught off-guard," says Sabet Elias, chief technology officer at US-based financial services firm Lehman Brothers Holdings. "Due to the lack of transparency [in data centre energy costs], most people only become aware of the problem when they're out of power," he says, noting that there are limits on how much electricity the local utility can run to a given facility. As power demands increase, more and more data centres are hitting that wall.

Joe Hedgecock, senior vice president and head of platform and data centres at Lehman Brothers, says power consumption is becoming one of his top concerns. "We're more constrained by power and cooling these days than by space," he says.

The Wall Street firm has 13,000 servers in six data centres worldwide and is migrating many of them onto server blades. That saves space but creates hot spots that require supplemental, targeted cooling systems located directly above the racks. The design pipes liquid refrigerant to a heat exchanger, which blows cold air into the racks. Targeted cooling is more energy-efficient than room air conditioning because the chilled air must travel a much shorter distance to cool the load.

Energy efficiency is a big factor in Lehman Brothers' data centre designs. "The data centres we're building have a high focus on power and cooling," says Elias. His strategy includes the use of blades, virtualization, grid computing and multicore processors to reduce power, cooling and space demands.

Those technologies offer one-time savings as servers are consolidated, but the underlying cause of the problem - compute density that's rising faster than energy efficiency gains - continues unabated. Michael Bell, an analyst at US-based Gartner, predicts that by 2008, half of all data centres will lack the power and cooling resources to meet the demands of higher-density computers.
