Powering Down

Electricity-hungry equipment, combined with rising energy prices, is devouring data centre budgets. Here's what you can do to get costs under control.

More Efficient Computers

Just as automakers built SUVs when oil prices were low, computer manufacturers answered market demand for ever-faster and less expensive computers. Energy usage was considered less important than performance.

In a race to create the fastest processors, chip makers continually shrank the size of the transistors that make up the processors. The faster chips consumed more electricity; at the same time, the smaller transistors allowed manufacturers to produce smaller servers that companies stacked in racks by the hundreds. In other words, companies could cram more computing power into smaller spaces.

Now that CIOs are beginning to care about energy costs, hardware makers are changing course. Silicon Valley equipment makers are now racing to capture the market for energy-efficient machines. Most chip makers are ramping up production of so-called dual-core processors, which are faster than traditional chips yet use less energy. Among these new chips is Advanced Micro Devices' Opteron processor, which runs on 95 watts of power compared with 150 watts for Intel's Xeon chips. In March, Intel unveiled a design for more energy-efficient chips. Dubbed Woodcrest, these dual-core chips, which Intel says will be available this month, will require 40 percent less power while offering as much as a 125 percent performance improvement over previous Intel chips.
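Those per-chip wattages add up quickly at rack scale. Here is a back-of-the-envelope sketch using the 95-watt and 150-watt figures quoted above; the rack size and the $US0.10/kWh electricity rate are illustrative assumptions, not figures from the article, and real costs would also include memory, disks, fans and cooling:

```python
# Rough sketch: annual electricity cost difference per rack, using the
# chip wattages quoted in the article. The rack size (40 dual-socket
# servers) and the $0.10/kWh rate are illustrative assumptions.

HOURS_PER_YEAR = 24 * 365

def annual_cost(watts_per_chip, chips, rate_per_kwh=0.10):
    """Yearly electricity cost for `chips` processors running flat out."""
    kwh = watts_per_chip * chips * HOURS_PER_YEAR / 1000
    return kwh * rate_per_kwh

chips = 40 * 2                    # assumed: 40 servers, 2 sockets each
xeon = annual_cost(150, chips)    # 150 W Xeon (per the article)
opteron = annual_cost(95, chips)  # 95 W Opteron (per the article)

print(f"Xeon rack:    ${xeon:,.0f}/yr")
print(f"Opteron rack: ${opteron:,.0f}/yr")
print(f"Savings:      ${xeon - opteron:,.0f}/yr")
```

Under those assumptions, the lower-wattage chips save close to $US4000 per rack per year before cooling is even counted.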

"The manufacturers are getting better now," says Paul Froutan, VP of product engineering for Rackspace, which manages servers for clients in its five data centres. With more than 18,000 servers to watch over, Froutan has been worrying about energy costs for years. He's seen the company's power consumption more than double in the past 36 months, and in the same period has seen his total monthly energy bill rise five times to nearly $US300,000.

Latimer, who oversees Notre Dame's Centre for Research Computing, first came to appreciate the power consumption problem when the university decided to hire a hosting company to house its high-performance computers off campus. On-campus electrical costs associated with data centres have generally been rolled together with other facilities costs, so the $US3000 monthly utility bill from the hosting company - for running a 512-node cluster of Xeon servers - came as a shock.

Notre Dame's provost recently called Latimer and other leaders together to talk about how to handle the increasing demands that a growing research program was beginning to place on the campus utility systems and infrastructure. Faculty members are requiring more space, greater electrical capacity and dedicated cooling for high-powered computers and other equipment such as MRI machines. Latimer's recent conversations with Intel, AMD, and other suppliers about his plans to buy new computer clusters "have been very focused on power consumption", he adds.

The Latest in Cooling

In September 2005, officials at Lawrence Livermore National Laboratory switched on one of the world's most powerful supercomputers. The system, designed to simulate nuclear reactions and dubbed ASC Purple, drew so much power (close to 4.8 megawatts) that the local utility, Pacific Gas & Electric, called to see what was going on. "They asked us to let them know when we turn it off," says Mark Seager, assistant deputy head for advanced technology at Lawrence Livermore.

What's more, ASC Purple generates a lot of heat. And so Seager and his colleagues are working on ways to cool it down more efficiently than by turning up the air-conditioning. The lab is trying out new cooling units for ASC Purple and for its second supercomputer, BlueGene/L (which was designed with lower-powered IBM chips but is nevertheless hot). Lawrence Livermore recently invested in a spray cooling system, an experimental method in which a coolant sprayed onto the hardware vaporizes as it absorbs the computer's heat and is then condensed away from the machine. Seager says this new method, which holds the promise of eliminating air-conditioning units, could save the lab up to 70 percent on its cooling costs.

It's not only supercomputers that create supersized cooling headaches. Tisdale, with NewEnergy Associates, says maintaining adequate and efficient cooling is one of the hardest problems to solve in the data centre. That's because as servers use more power, they produce more heat, forcing managers to spend still more power on cooling. "You get hit with a double whammy on the cooling front," says Rackspace's Froutan.
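The double whammy is easy to quantify. A minimal sketch follows, assuming an illustrative cooling overhead of 0.7 watts of cooling per watt of server load; the article gives no ratio, and real figures vary widely with climate and data centre design:

```python
# Sketch of the "double whammy": every extra watt the servers draw also
# demands extra cooling power. The 0.7 cooling-watts-per-IT-watt ratio
# below is an illustrative assumption, not a figure from the article.

def total_draw(it_watts, cooling_ratio=0.7):
    """Facility power: IT load plus the cooling needed to remove its heat."""
    return it_watts * (1 + cooling_ratio)

before = total_draw(100_000)  # 100 kW of servers
after = total_draw(120_000)   # add 20 kW of new servers

print(f"Extra IT load:       20.0 kW")
print(f"Extra facility load: {(after - before) / 1000:.1f} kW")
# The 20 kW of new servers shows up as 34 kW at the meter, because the
# heat they emit has to be pumped out of the room as well.
```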

To address the cooling dilemmas of more typical data centres, hardware makers such as Hewlett-Packard, IBM, Silicon Graphics and Egenera have offered or are coming out with liquid cooling options. Liquid cooling, which involves cooling air using chilled water, is an old method that is making a comeback because it's more efficient than air-conditioning.
