The Liquid Data Centre

Liquid cooling is coming to your data centre. It's not a matter of whether you want it or not. A migration from air to direct liquid cooling is simply the only option that can address surging data centre energy costs and allow the power densities of servers to continue to increase into the next decade. It will be too expensive not to adopt it. And it's coming sooner than you might think.

If it were up to engineers, direct liquid cooling would have been here five years ago, says 25-year IBM veteran Roger R. Schmidt, a distinguished engineer with experience designing water-cooled mainframes. He expects distributed systems to follow in the mainframe's footsteps.

Some data centre managers may not fully grasp the problem, because the efficiency numbers sound reassuring: over the past eight years, server performance has increased by a factor of 75 while performance per watt has increased 16 times, according to Hewlett-Packard. But data centres aren't using fewer processors - they're using more than ever. Meanwhile, the power density of equipment has increased to the point where power and cooling systems vendor Liebert is supporting clients with state-of-the-art server racks exceeding 30 kilowatts (kW).
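
The arithmetic behind that reassurance is worth making explicit. A rough sketch, using only the two HP figures quoted above:

```python
# Back-of-the-envelope: if performance rose 75x while performance per watt
# rose only 16x, the power drawn per server grew by their ratio.
perf_gain = 75            # HP figure: performance increase over eight years
perf_per_watt_gain = 16   # HP figure: efficiency increase over the same period

power_growth = perf_gain / perf_per_watt_gain
print(f"Implied power draw per server: ~{power_growth:.1f}x higher than eight years ago")
# ~4.7x -- and with more servers packed into each rack, rack density climbs faster still.
```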

That creates two problems. First, energy costs are spiralling upward. Many data centre managers don't see that today, because their power use isn't metered separately and isn't part of the IT budget. As costs rise that's likely to change, forcing IT to retrofit data centres to the new reality.

Second, all that energy gets converted to heat. If you want to know what the heat coming off a 30kW rack feels like, turn your broiler oven on full blast and open the door. That's 3.4kW. Now imagine jamming nine broiler ovens, all running full tilt, into the confines of a single rack in your data centre and trying to maintain the internal temperature at or below 75 degrees Fahrenheit (23.9 degrees Celsius). Dave Kelley, manager of environmental application engineering at Liebert, says current air-cooling technologies can perhaps handle racks in the "mid-30s" of kilowatts. But equipment vendors say that 50kW racks could be a reality within five years.
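
For a sense of what removing that heat with air entails, here is a minimal sketch of the standard sensible-heat balance (Q = m·cp·ΔT); the 12-degree temperature rise across the rack is an assumed figure for illustration, not one from Liebert:

```python
# Sensible-heat balance for air cooling: Q = m_dot * c_p * delta_T.
# How much air has to move through a rack to carry its heat away?
rack_power_kw = 30.0   # rack load cited in the article
delta_t_c = 12.0       # assumed allowable air temperature rise across the rack, in C
c_p_air = 1.005        # specific heat of air, kJ/(kg*K)
air_density = 1.2      # kg/m^3 at roughly room conditions

mass_flow = rack_power_kw / (c_p_air * delta_t_c)   # kg of air per second
volume_flow = mass_flow / air_density               # m^3 per second
cfm = volume_flow * 2118.88                         # cubic feet per minute

print(f"Airflow needed: {volume_flow:.1f} m^3/s (~{cfm:,.0f} CFM) for one 30kW rack")
```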

Christian Belady, a distinguished engineer at HP, is passionate about educating data centre managers about the problem and establishing standards for liquid-cooled data centres. "If you look at the energy costs associated with not driving toward density and taking advantage of these densities, there will be huge penalties from an efficiency standpoint," Belady says.

But all that heat will have to be removed from the data centre, which is one reason why data centre infrastructure costs per server have risen. In fact, while the cost of server hardware has remained flat or declined slightly, Belady estimates that the cost of the data centre infrastructure to support a server over a three-year lifespan exceeded the hardware cost back in 2003. This year, the cost of energy (power and cooling) required per server, amortized over those same three years, has pulled even with the equipment cost. By 2008, it will surpass it, becoming the single largest component of server total cost of ownership (TCO).
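
To see how energy can overtake hardware in the TCO, consider an illustrative calculation; the server wattage, electricity price, cooling overhead and purchase price below are all assumptions chosen for the sake of the arithmetic, not figures from Belady or HP:

```python
# Illustrative three-year energy cost per server versus its purchase price.
# Every input below is an assumption for the sake of the arithmetic,
# not a figure from HP or the article.
server_watts = 400        # assumed average draw of a volume server
cooling_overhead = 1.0    # assumed: cooling consumes as much again as the IT load
rate_per_kwh = 0.10       # assumed electricity price, $/kWh
hours = 3 * 365 * 24      # three-year lifespan, running around the clock
hardware_cost = 2500      # assumed purchase price of the server

energy_kwh = (server_watts / 1000) * (1 + cooling_overhead) * hours
energy_cost = energy_kwh * rate_per_kwh

print(f"Three-year power and cooling: ${energy_cost:,.0f} vs hardware: ${hardware_cost:,}")
# About $2,100 versus $2,500 here; nudge the draw, tariff or overhead upward
# and energy overtakes the hardware -- which is Belady's point.
```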

That's where liquid cooling comes in. Direct cooling of servers by piping liquid refrigerant or chilled water directly to components within racks is far more efficient than using air and will become a requirement.
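
The physics behind that claim is straightforward: per unit volume, water absorbs on the order of 3,500 times as much heat as air for the same temperature rise, which is why a thin water loop can do the work of a roaring airstream. A quick comparison using standard textbook properties:

```python
# Volumetric heat capacity: heat absorbed by a cubic metre of coolant per degree of rise.
# Standard textbook values at roughly room temperature.
water_cp = 4.18          # kJ/(kg*K)
water_density = 1000.0   # kg/m^3
air_cp = 1.005           # kJ/(kg*K)
air_density = 1.2        # kg/m^3

water_volumetric = water_cp * water_density   # ~4,180 kJ/(m^3*K)
air_volumetric = air_cp * air_density         # ~1.2 kJ/(m^3*K)

print(f"Water carries ~{water_volumetric / air_volumetric:,.0f}x more heat per unit "
      f"volume than air for the same temperature rise")
```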

How soon? Kelley says his company has projects under way with IT equipment vendors that he can't discuss. But he predicts that "within a couple of years, somebody will have something where you can plug [a line containing liquid coolant] directly into a processor".

More efficient designs could substantially cut cooling costs, which today can account for more than half of data centre energy use. Best practices and optimizations of existing infrastructure can bring immediate savings. On racks approaching 30kW, users are turning to spot-cooling systems that run liquid refrigerant or chilled water to a heat exchanger that blows cool air from directly above or adjacent to server racks. That's more efficient than room air-conditioning units because the chilled air travels a shorter distance. These designs pipe liquid coolant, already used by computer room air-conditioning units at the outer edges of the data centre, up to the racks themselves. It's not hard to imagine extending those lines into the racks to deliver direct liquid cooling. The heat exchanger goes away, perhaps replaced in an IBM BladeCenter chassis with a hookup that accepts a chilled water or liquid refrigerant feed.
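
The payoff is easy to bound: if cooling is half the total bill, every percentage point shaved off cooling energy cuts the overall bill by half a point. A quick sketch, where the 30 per cent reduction is an assumed figure for illustration, not a vendor claim:

```python
# If cooling is half the data centre's energy use, what does trimming it buy overall?
cooling_share = 0.5       # cooling as a fraction of total energy, per the article
cooling_reduction = 0.30  # assumed efficiency gain from spot or direct liquid cooling

overall_saving = cooling_share * cooling_reduction
print(f"Total energy bill falls by ~{overall_saving:.0%}")   # ~15%
```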

Today, spot-cooling systems typically require ad hoc copper piping overhead or under the floor to reach individual racks. As more and more racks require such cooling, data centre managers face a potential mess. What's worse, since few standards exist, things as basic as liquid coolant specifications and pipe couplings remain proprietary. Belady is pushing for common standards. "If we wait," he says, "everything is going to be much more proprietary, and when that happens, you lose the opportunity for interoperability."
