Chill Out? No Way, Green Grid Says

The Green Grid, a nonprofit organization dedicated to making IT infrastructures and data centers more energy-efficient, is making the case that data center operators are running their facilities too conservatively.

Rather than rely on mechanical chillers, Green Grid argues in a new white paper, data centers can reduce power consumption by allowing inlet temperatures to rise above the customary 20 degrees C.

Traditional data from the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) suggested that data center operators run their facilities in a recommended range of between 18 and 27 degrees Celsius, with a looser allowable range of between 10 and 35 degrees C. Many vendors already supported temperature ranges outside those limits, but the ASHRAE data represented a compromise between vendors, rather than actual thermal limits.

Green Grid originally recommended that data center operators build to the ASHRAE A2 specifications: 10 to 35 degrees C (dry-bulb temperature) and 20 to 80 percent humidity. But the paper also presented data that a range of between 20 and 35 degrees C was acceptable.

Data centers have traditionally included chillers, mechanical cooling devices designed to lower the inlet temperature. Cooling the air, according to what the paper called anecdotal evidence, lowered the number of server failures that a data center experienced each year. But chilling the air also added cost, and PUE (power usage effectiveness) numbers went up as a result.
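PUE is simply total facility power divided by the power delivered to IT equipment, so any power spent on chillers pushes the ratio up. A quick sketch of the arithmetic, using purely illustrative figures (not numbers from the paper):

```python
def pue(it_kw: float, cooling_kw: float, other_kw: float) -> float:
    """PUE = total facility power / IT equipment power (ideal value is 1.0)."""
    return (it_kw + cooling_kw + other_kw) / it_kw

# Hypothetical 1 MW IT load; cooling and overhead figures are made up
# to show the direction of the effect, not to match any real facility.
with_chillers = pue(it_kw=1000, cooling_kw=600, other_kw=100)    # 1.7
with_economizer = pue(it_kw=1000, cooling_kw=150, other_kw=100)  # 1.25
```

The lower the cooling term, the closer PUE gets to its ideal of 1.0, which is why replacing chillers with ambient-air cooling shows up directly in the metric.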

To test whether higher temperatures actually affected the rate of failure, Intel ran a proof of concept in a dry, temperate climate over a 10-month period, using 900 commercially available blade servers. The servers were air cooled, using ambient temperatures, and experienced swings in temperature, humidity, and air quality. But even then, Intel observed no significant increase in server failures.

The Green Grid paper concluded that many data centers can realize overall operational cost savings by leveraging these looser environmental controls within the wider range of supported temperature and humidity limits as established by equipment manufacturers. One of the easiest ways to do so, the paper concluded, was simply to build the data center in an area of lower ambient temperature.

Larger temperature tolerances allow data centers to be built in many more locations than previously thought, with the idea that natural air-based cooling via existing atmospheric conditions could replace mechanical cooling in most cases.

The Green Grid said that accommodating the A2 specifications would allow 75 percent of North American data centers to operate air-side cooling economizers for more than 8,500 of the 8,765 hours in a year. In Europe, the greater tolerances would accommodate up to 99 percent of the available regions, with some areas of Spain and Western Ireland excluded.
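Put another way, those figures mean free cooling could cover roughly 97 percent of the year. The arithmetic is trivial but worth making explicit:

```python
HOURS_PER_YEAR = 8765       # figure used in the Green Grid paper
economizer_hours = 8500     # lower bound cited for 75% of North American sites

# Fraction of the year an air-side economizer could carry the cooling load.
fraction = economizer_hours / HOURS_PER_YEAR  # ~0.97
```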

The paper also established a baseline of normal failure rates: about 2 percent to 4 percent per server per year, based on an operating temperature of 20 degrees C—in other words, 20 to 40 servers failing annually in a 1,000-server data center. Using 2011 ASHRAE data, Green Grid concluded that failure rates increase as inlet temperatures increase, and decrease as the temperature decreases—but only for the duration of the exposure. In other words, a few hot days would increase the possibility that servers would fail, but not as much as prolonged exposure.
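The baseline above is straightforward to scale to any fleet size. A minimal sketch of that calculation, using the paper's 2–4 percent annual rates:

```python
def expected_annual_failures(server_count: int, annual_rate: float) -> float:
    """Expected server failures per year at a given annual failure rate."""
    return server_count * annual_rate

# The paper's baseline: 2-4% per server per year at ~20 degrees C inlet.
low = expected_annual_failures(1000, 0.02)   # 20 servers
high = expected_annual_failures(1000, 0.04)  # 40 servers
```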

The tradeoff to a higher inlet temperature is the need to run fans at a higher speed to directly cool the servers. Other methods, such as evaporative cooling, can also be used. The paper noted that some server designs, such as 1U rack servers, are at a disadvantage in that they tend to be less efficient than larger devices when handling higher inlet temperatures because of the smaller size and higher rotational speed of the fans used within them. Blade servers are a far better design, Green Grid said.

Still, spinning fans at a higher speed is an acceptable tradeoff for lower power costs.

“It is important to note that any increase in total server energy consumption resulting from a rise in inlet temperature in an air-side economized data center is likely to prove inconsequential in comparison to the data center build and running costs associated with a restricted operating range and the use of mechanical chiller based cooling,” the paper notes.

“In other words, the incremental costs from operating an air-side-cooled data center, with occasional increased server energy consumption,” it added, “are almost certainly lower than the extra cost associated with using mechanical cooling to maintain a constant supply temperature when external ambient temperatures are toward the top of the range.”

The paper stops short of recommending chillers be completely eliminated, because of short-term heat waves. But the Green Grid data indicates that chillers should be given less priority in new data center build-outs.


Image: Green Grid
