Researchers speed up the chase for cooler data centers

With energy costs rising and data centers at the core of IT strategy for many companies, cooling the growing number of computers jammed into data centers is an issue that has taken center stage.

Some innovative university researchers are focusing on cutting the cost of cooling the hot racks of servers in data centers. Last month, Syracuse University teamed with IBM to create one of the world's most efficient data centers on the school's campus, while the Georgia Institute of Technology announced last week that its faculty had built a 1,100-square-foot facility where researchers can test new cooling designs and measure their impact on power efficiency.

The Georgia Tech researchers aim to analyze power consumption "all the way from the chip to the data center facility," says Yogendra Joshi, a professor of mechanical engineering at the university.

"We are addressing the inefficiencies at all scales," Joshi says. "Some researchers are looking at cooling at the chip level, some are looking at the cabinet level, and some are looking at the facilities level."

Two major trends in the data center sector are driving the interest in cooling. Demand for data centers continues to rise despite the down economy, and Moore's Law, the prediction that processors will become twice as powerful every 18 months to two years, means that data centers will produce more heat. At the same time, companies looking to build new data centers are finding resources increasingly scarce: power is more expensive, and water for cooling is harder to come by.
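
To give a sense of how quickly that compounding adds up, here is a minimal sketch using the doubling periods quoted above; the assumption that heat output grows in step with processing power is a simplification made for illustration, not something the article quantifies.

```python
# How the doubling the article describes compounds over time.
# The 18-month and two-year doubling periods come from the article; treating
# heat output as growing in step with processing power is a simplification
# made only for this sketch.

def growth_factor(years: float, doubling_period_years: float) -> float:
    """Multiplier on processing power (and, roughly, heat) after `years`."""
    return 2 ** (years / doubling_period_years)

for period in (1.5, 2.0):
    print(f"doubling every {period} years -> "
          f"{growth_factor(5, period):.1f}x after 5 years")
```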

"It is a key cost and a rising one," says Marion Howard Healy, an analyst focusing on data-center cooling for the Broad Group. "The increase in unstructured data means that storage costs are going up. And servers are becoming much more powerful, so (they) require more cooling then they used to."

Five years ago, a typical server rack, which is the size of a household refrigerator, produced between 1 and 5 kilowatts of heat. Today, typical server racks generate around 18 kilowatts, about as much as two average households. The trend towards hotter hardware will only continue: Manufacturers are working on cabinets containing higher-power chips that will produce three times as much heat, or about 60 kilowatts.
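
A rough back-of-envelope sketch of what an 18-kilowatt rack means in energy terms follows; the rack figure comes from the article, while the cooling overhead and electricity price are illustrative assumptions.

```python
# Back-of-envelope estimate of the energy tied up in one 18 kW rack.
# The 18 kW load is the figure cited in the article; the cooling overhead
# and electricity price are illustrative assumptions, not reported values.

RACK_POWER_KW = 18            # typical modern rack heat load (from the article)
HOURS_PER_YEAR = 24 * 365
COOLING_OVERHEAD = 0.5        # assume ~0.5 kWh of cooling per kWh of IT load
PRICE_PER_KWH = 0.10          # assumed electricity price, in dollars per kWh

it_energy_kwh = RACK_POWER_KW * HOURS_PER_YEAR           # ~157,680 kWh/year
cooling_energy_kwh = it_energy_kwh * COOLING_OVERHEAD    # ~78,840 kWh/year

print(f"IT energy per rack:      {it_energy_kwh:,.0f} kWh/year")
print(f"Cooling energy per rack: {cooling_energy_kwh:,.0f} kWh/year")
print(f"Cooling cost per rack:   ${cooling_energy_kwh * PRICE_PER_KWH:,.0f}/year")
```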

That could limit the types of cooling technology that can be used.

"We are getting to the point where you cannot do the cooling from air alone," Joshi says. "We want to do liquid cooling. You could certainly do a 60-kilowatt rack with liquid cooling."

The two trends mean that future data centers will need to drastically reduce the cost of cooling to prevent it from overwhelming facility budgets. Typically, the energy required to cool a data center accounts for 30 to 50 percent of the cost of running the facility. In total, 60 percent of the cost of a data center relates to energy, Broad Group's Healy says. And with more nations considering some form of carbon tax, companies should expect that figure to move higher.

"All of these things conspire to make sure that you are using your resources in the most efficient way," Healy says.

That's why more efficient cooling has become a key problem for information-technology companies, and different companies are tackling it in different ways. Intel has focused on more efficient processors and on-chip cooling methods. Server manufacturers are building more compact machines that can be cooled efficiently. And facility architects are finding better configurations that save on cooling costs.

Georgia Tech's Joshi aims to reduce data center cooling costs by more than 15 percent, and his group is making good progress: the researchers have found a way of configuring cabinets in the data center to increase air-cooling efficiency. Rather than long rows of server racks with hot air exiting the cabinets on one side and cool air entering on the other, Joshi and his colleagues found that four cabinets arranged in a plus formation, with cool air entering from the middle, works best.

"Just by changing the arrangement, you can get 20 to 30 percent lower energy costs," he said. "In some cases, it can be even more."

It's a holistic approach to tackling the cooling problem, and one that other researchers are following as well. Working with IBM, Syracuse University has embarked on a project to halve the energy costs of its on-campus data center. Announced on May 29, the project will incorporate on-site power generation and a liquid cooling system that pumps chilled water to heat exchangers on the rear of the server cabinets.

"Energy use is becoming the largest single cost in operating data centers, with $2 billion per year wasted nationally dues to inefficiencies," Vijay Lund, vice president for development and manufacturing operations for IBM, said in a statement announcing the partnership.
