How green is your data centre? If you don't care now, you will soon. Most data centre managers haven't noticed the steady increase in electricity costs, since in most cases they don't see those bills. But they do see the symptoms of surging power demands.
High-density servers are creating hot spots in data centres, with power densities surpassing 30 kilowatts per rack for some high-end systems. As a result, some data centre managers are finding that they can't distribute enough power to those racks on the floor. Others have maxed out the utility's ability to deliver additional capacity to their location.
Ken Brill, founder and executive director of The Uptime Institute, sees the beginnings of a potential crisis.
"The benefits of [Moore's Law] are eroding as the costs of data centres rise dramatically," he says. Increasing demand for power is the culprit, driven by both higher power densities and strong growth in the number of servers in use.
Server electricity consumption in data centres has quietly doubled in the past five years, according to a study sponsored by Advanced Micro Devices and conducted by Jonathan Koomey, a consulting professor at Stanford University and a staff scientist at the Lawrence Berkeley National Laboratory in the US.
Server performance is improving faster than energy efficiency is advancing. "If we're going to get energy efficiency rising faster than the rate of performance increase, we're going to have to do something radically different than what we're doing today," Brill says.
Fortunately, there are many steps that data centre managers can take to start reducing power consumption in existing data centres without making a huge investment — or sacrificing performance or availability.
1. Consolidate, consolidate, consolidate.
Consolidating servers is a good place to start. In many data centres, "between 10 percent and 30 percent of servers are dead and could be turned off", Brill says.
Removing one physical server from service saves $US560 annually in electricity costs, assuming a cost of 8 cents per kilowatt-hour, says Bogomil Balkansky, director of product marketing for Virtual Infrastructure 3 at VMware in the US.
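Balkansky's figure is easy to sanity-check with back-of-the-envelope arithmetic. In this sketch, the 400-watt server draw and the 2x overhead multiplier for cooling and power distribution are illustrative assumptions, not figures from the article; only the 8 cents per kilowatt-hour comes from Balkansky.

```python
# Rough annual electricity cost of one always-on server.
# Assumptions (not from the article): ~400 W average draw per server,
# and roughly 1 W of cooling/distribution overhead per 1 W of IT load.
SERVER_DRAW_KW = 0.4      # assumed average server draw, in kilowatts
OVERHEAD_FACTOR = 2.0     # assumed cooling/power-distribution multiplier
RATE_PER_KWH = 0.08       # US$0.08 per kWh, as quoted in the article
HOURS_PER_YEAR = 24 * 365 # 8760 hours

annual_cost = SERVER_DRAW_KW * OVERHEAD_FACTOR * HOURS_PER_YEAR * RATE_PER_KWH
print(f"annual saving per decommissioned server: ${annual_cost:.0f}")
```

Under those assumptions the result lands at roughly $US560 a year, consistent with Balkansky's number; a heavier server or a higher tariff pushes it up proportionally.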
Once idle servers have been removed, data centre managers should consider moving as many server-based applications as feasible into virtual machines. That allows IT to substantially reduce the number of physical servers required while increasing the utilization levels of remaining servers.
Most physical servers today run at about 10 percent to 15 percent utilization. Since an idle server can consume as much as 30 percent of the energy it uses at peak utilization, you get more bang for your energy buck by increasing utilization levels, says Balkansky.
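The arithmetic behind that claim can be sketched with a simple linear power model. The 30 percent idle figure is Balkansky's; the 400-watt peak draw and the server counts below are illustrative assumptions.

```python
# Linear power model: P(u) = P_idle + (P_peak - P_idle) * u,
# where u is utilization between 0.0 and 1.0.
P_PEAK_W = 400.0            # assumed peak draw per server
P_IDLE_W = 0.30 * P_PEAK_W  # idle draw ~30% of peak, per the article

def power_w(utilization: float) -> float:
    """Estimated draw of one server at the given utilization."""
    return P_IDLE_W + (P_PEAK_W - P_IDLE_W) * utilization

# Ten lightly loaded servers at 10% utilization...
before = 10 * power_w(0.10)
# ...versus the same total work consolidated onto two hosts at 50% each.
after = 2 * power_w(0.50)

print(f"before: {before:.0f} W, after: {after:.0f} W")
print(f"saving: {1 - after / before:.0%}")  # roughly 65%
```

Because the idle floor is paid on every running server regardless of load, spreading one server's worth of work across ten machines wastes most of the energy; consolidation recovers it.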
To that end, VMware is working on a new feature associated with its Distributed Resource Scheduler that will dynamically allocate workloads among physical servers in a resource pool to maximize energy efficiency. Distributed Power Management will "squeeze virtual machines on as few physical machines as possible", Balkansky says, and then power down servers that aren't in use. It will make adjustments dynamically as workloads change. Workloads might be consolidated in the evening during off hours, for example, then reallocated across more physical machines in the morning, as activity increases.
2. Turn on power management.
Although power management tools are available, administrators don't always use them. "In a typical data centre, the electricity usage hardly varies at all, but the IT load varies by a factor of three or more. That tells you that we're not properly implementing power management," says Amory Lovins, chairman and chief scientist at the US-based Rocky Mountain Institute. Just taking full advantage of power management features and turning off unused servers can cut data centre energy requirements by about 20 percent, he adds.
That's not happening in many data centres today because administrators focus almost exclusively on uptime and performance and aren't comfortable with available power management tools, says Christian Belady, distinguished technologist at Hewlett-Packard. But turning on power management can actually increase reliability and uptime by reducing stresses on data centre power and cooling systems, he says.
Vendors could also do more to facilitate the use of power management capabilities, says Brent Kerby, Opteron product manager on AMD's server team. "Power management technology is not leveraged as much as it should be," Kerby says. "In Microsoft Windows, support is inherent, but you have to adjust the power scheme to take advantage of it." Instead, he says, that should be turned on by default.
You can realize significant savings by leveraging power management in the latest processors. With AMD's newest designs, "at 50 percent CPU utilization, you'll see a 65 percent saving in power. Even at 80 percent utilization, you'll see a 25 percent saving in power", just by turning on power management, says Kerby. Other chip makers are working on similar technologies.
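To put Kerby's percentages in dollar terms, here is a hedged sketch for a hypothetical rack. The server count, socket count and 95-watt CPU rating are assumptions, and applying the saving percentages to the full CPU rating is a simplification; only the saving percentages and the 8 cents per kilowatt-hour tariff come from the article.

```python
# Annual cost impact of CPU power management for a hypothetical rack.
# Assumptions (not from the article): 40 dual-socket servers with 95 W CPUs.
CPU_KW = 0.095           # assumed per-CPU power rating, in kilowatts
SOCKETS_PER_SERVER = 2   # assumed dual-socket servers
SERVERS = 40             # assumed rack size
RATE_PER_KWH = 0.08      # US$0.08 per kWh, as quoted in the article
HOURS_PER_YEAR = 8760

# Kerby's figures: utilization -> fractional power saving.
savings_by_util = {0.50: 0.65, 0.80: 0.25}

for util, saving in savings_by_util.items():
    kw_saved = SERVERS * SOCKETS_PER_SERVER * CPU_KW * saving
    dollars = kw_saved * HOURS_PER_YEAR * RATE_PER_KWH
    print(f"at {util:.0%} utilization: {kw_saved:.2f} kW saved, ~${dollars:.0f}/yr")
```

Even before counting the matching reduction in cooling load, the CPU-side savings alone run to thousands of dollars a year for a single rack under these assumptions.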
But power management can cause more problems than it cures, says Jason Williams, chief technology officer at DigiTar, a messaging logistics service provider in the US. He runs Linux on Sun T2000 servers with UltraSparc multicore processors. "We use a lot of Linux, and [power management] can cause some very screwy behaviours in the operating system," he says.
3. Upgrade to energy-efficient servers.
The first generation of multicore chip designs resulted in a marked decrease in overall power consumption. "Intel's Xeon 5100 delivered twice the performance with 40 percent less power," says Lori Wigle, director of server technology and initiatives marketing at Intel. Moving to servers based on these designs should increase energy efficiency. (Future gains, however, are likely to be more limited. Sun Microsystems, Intel and AMD all say they expect power consumption to remain flat in the near term.)