Archive for May, 2011

The Design of NCAR’s “Chillerless” data center with over 600 Watts/SF

Sunday, May 22nd, 2011

“Chiller-less”, “refrigeration-less”, and “compressor-less” designs are something I have been striving toward for several years, with my testing and use of air-economized systems in data centers starting in 2002. In 2008-2009, I was lucky to join Rumsey Engineers (now the Integral Group) as a consultant to work on data center projects. It was a fantastic experience, as Rumsey Engineers designs the most efficient mechanical systems of any team I know. In 2009, they believed they had more LEED Platinum buildings than any other engineering firm, and their numbers bear that out.

Together in early 2009 we led a design charrette for a new data center for the National Center for Atmospheric Research (NCAR), the folks who study climate data. As part of our design scope, we researched future generations of High-Performance Computing (HPC, aka supercomputer) equipment: its expected energy use, load density, cooling system connections and inlet temperature requirements (some were air based, others water based). We looked at future generations of equipment because by the time the data center was built and the systems ordered and delivered, densities and cooling system connections would be different than they are today. This is a key point that we make with all of our projects: look at what the hardware systems will need several years from now, as it usually takes 1-2 years to build a data center, several years to fully load it, and we expect it to meet our operational needs for 10, 20 or more years. So, if the median of the data center’s life will be 7-15+ years away, then why would we design it to meet today’s computers? This is a mistake we see often in many people’s designs and site selections. Life changes; we must think ahead.

And this is why I research and pay attention to many cutting- and leading-edge technologies, and why I sit on the boards of new and innovative technology companies: it helps me see the future. And even though I was shocked to find future HPC systems with densities of over 2,500 Watts per square foot, I know that many computing systems of the future will use much lower densities than the average today, and there are always many technologies that we employ, not just one. Hence, we took a pragmatic approach to this analysis of future HPC systems and the needs of the leading researchers in climate change. (Incidentally, we also did an operating cost analysis of HPC systems that will come out between 2012 and 2014, and it yielded fairly broad cost differences: enough that a first pass based upon compute performance would seem to favor one system, while simply purchasing more of another system to get the same performance would still cost less. This stresses the important point to always choose equipment that affords the lowest true total cost of ownership.)
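To put rough numbers behind that TCO point, here is a minimal sketch in Python of the kind of comparison I mean. Every figure below (node performance, prices, power draw, energy rate, ownership period) is a hypothetical placeholder, not NCAR’s data; the point is only the shape of the math.

    import math

    def total_cost_for_target(perf_per_node, capex_per_node, kw_per_node,
                              target_perf, years=5, kwh_cost=0.06, pue=1.14):
        # Cost to reach a target aggregate performance with one system type:
        # capital cost plus energy cost over the ownership period.
        nodes = math.ceil(target_perf / perf_per_node)
        capex = nodes * capex_per_node
        energy_kwh = nodes * kw_per_node * pue * 24 * 365 * years
        return nodes, capex + energy_kwh * kwh_cost

    target = 1000.0  # arbitrary aggregate performance units

    # System A: faster per node but pricier; System B: slower per node but cheaper.
    nodes_a, cost_a = total_cost_for_target(10.0, 120_000, 8.0, target)
    nodes_b, cost_b = total_cost_for_target(6.0, 55_000, 5.0, target)

    print(f"System A: {nodes_a} nodes, total cost ${cost_a:,.0f}")
    print(f"System B: {nodes_b} nodes, total cost ${cost_b:,.0f}")  # more nodes, yet lower total cost

With these placeholder inputs, the “slower” system needs roughly two-thirds more nodes yet still comes out millions cheaper over the ownership period, which is exactly the kind of result a performance-first comparison would miss.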

Given that the site chosen for this data center was Cheyenne, Wyoming, a state with one of the highest percentages of coal-generated electricity, energy efficiency in this design was essential. Although we were pretty certain we knew which type of mechanical system would be most energy efficient (and likely also lowest cost to build; they almost always go hand-in-hand when working pragmatically and holistically), we reviewed a rough design of several systems, including a calculated annual PUE and a rough estimated build cost for each. We explored airside economization with 68F and 90F supply air temperatures, the Kyoto cooling system (heat wheel), a modified heat wheel approach with economization, and waterside economization with 46F and 64F chilled supply water. Our modified heat wheel and our high-supply-temperature air- and water-economized solutions did not require chillers; the temperatures were what they were because we pushed them up until chillers were no longer required. We chose the water-economized system, which was our guess at the best system before we started any design analysis. It provided 64F supply water, which was important because many HPC systems of the future will only run on chilled water and this temperature is acceptable for the majority of those systems, and it also provided the lowest PUE, about 1.11, AND the lowest cost to build. This once again proves my motto that we build the most efficient data centers at the lowest cost: the two seemingly disparate goals of capital cost and operating expense are once again aligned. That is why we take a very pragmatic and holistic approach with an open mind, to achieve the most.
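For illustration, here is a minimal Python sketch of that kind of side-by-side screening. The list of options and the chiller-free flags follow the description above, and the roughly 1.11 PUE for the 64F waterside economizer is from our analysis; every other PUE and build cost figure is a hypothetical placeholder, not a number from the project.

    # Screen candidate cooling systems on annual PUE and rough build cost.
    options = {
        "airside economization, 68F supply":   {"annual_pue": 1.20, "build_cost_per_kw": 11_000, "needs_chillers": True},
        "airside economization, 90F supply":   {"annual_pue": 1.15, "build_cost_per_kw": 10_000, "needs_chillers": False},
        "Kyoto heat wheel":                    {"annual_pue": 1.18, "build_cost_per_kw": 12_000, "needs_chillers": True},
        "modified heat wheel + economization": {"annual_pue": 1.14, "build_cost_per_kw": 11_500, "needs_chillers": False},
        "waterside economization, 46F supply": {"annual_pue": 1.25, "build_cost_per_kw": 12_500, "needs_chillers": True},
        "waterside economization, 64F supply": {"annual_pue": 1.11, "build_cost_per_kw": 9_500,  "needs_chillers": False},
    }

    best_pue  = min(options, key=lambda k: options[k]["annual_pue"])
    best_cost = min(options, key=lambda k: options[k]["build_cost_per_kw"])

    print("Lowest annual PUE: ", best_pue)
    print("Lowest build cost: ", best_cost)
    # In this screening both point to the same option, echoing the motto above
    # that efficiency and low capital cost tend to align.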

This new 153,000 SF building is designed to accommodate and secure the Scientific Computing Division’s (SCD) future in sustaining the computing initiatives and needs of UCAR’s scientific research constituents. The final design was based upon NCAR’s actual computing and data storage needs and a thorough review of future High Performance Computing (HPC) and storage technologies, leading to a 625 Watts/SF HPC space and a 250 Watts/SF medium density area. The data center is divided into two raised floor modules of 12,000 SF each, with a separate data tape system area to reduce costs, increase efficiency and provide different temperature and humidity requirements than the HPC area. Also provided are a 16,000 SF office and visitor area heated by waste heat from the data center and a total facility capacity of 30 MVA.

Unique requirements of this high density HPC data center were to also achieve ultra-high energy efficiency and LEED Silver certification on a modest construction budget. Various cooling options were analyzed, including Kyoto and other heat wheels, air economization, a creative solution of direct heat exchange with the city water supply pipe, and variations of water-economized systems. Ultimately, LEED Gold certification and an annual operating PUE of about 1.14 are expected. A PUE this low was thought to be impossible at the time of design (early 2009), especially for such high density at Tier III. Through creative problem solving, the low PUE is obtained by designing a 9’ interstitial space above the raised floor combined with a 10’ waffle-grid raised floor to provide a low-pressure-drop air recirculation system built into the building itself. Ten day-one chillers of 100 tons each provide supplemental cooling and optimum efficiency as load varies during hot summer months, while an indirect evaporative system with 96 fans in a fan wall provides ultra-low-energy cooling. An on-site water supply tank, a total of nine standby generators at full build-out of 2.5 MVA each, six 750 kVA UPS modules and other systems support the low PUE and low construction budget for this high density HPC data center.
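As a quick sanity check on those numbers, here is a back-of-the-envelope calculation in Python using only the figures quoted above. It assumes one 12,000 SF module at the 625 W/SF HPC density and the other at the 250 W/SF medium density, which is only my reading of the description, and it ignores the tape, office and support areas.

    module_area_sf = 12_000
    hpc_density_w_per_sf = 625
    medium_density_w_per_sf = 250
    design_pue = 1.14

    # Estimated IT load across the two raised floor modules, then total facility load.
    it_load_mw = module_area_sf * (hpc_density_w_per_sf + medium_density_w_per_sf) / 1e6  # 10.5 MW
    total_load_mw = it_load_mw * design_pue                                               # ~12.0 MW

    gen_capacity_mva = 9 * 2.5    # nine standby generators at full build-out
    ups_capacity_mva = 6 * 0.750  # six 750 kVA UPS modules

    print(f"Estimated IT load:        {it_load_mw:.1f} MW")
    print(f"Total load at PUE {design_pue}: {total_load_mw:.1f} MW (facility sized at 30 MVA)")
    print(f"Generator capacity:       {gen_capacity_mva:.1f} MVA")
    print(f"UPS capacity:             {ups_capacity_mva:.2f} MVA")

Under those assumptions the quoted 30 MVA facility capacity and 22.5 MVA of full build-out generation sit comfortably above the estimated load, which is consistent with the phased, modular approach described above.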

Here is a drawing of this data center now under construction:

Considering all of the vulnerabilities of data center sites

Thursday, May 5th, 2011

Where to hide your data center and protect it from damaging natural disasters?

I have built two data centers in the Raleigh, North Carolina area. I traveled to Raleigh about once per month over a couple of years for these projects, many times driving in ice storms. It’s really quite fun to drive around when everything is coated in a sheet of ice; it’s like driving a Zamboni without an ice rink. Quite frankly, only people like me who have too much confidence in their driving abilities drive; everyone else stays home, and for good reason, as many cars get stuck on the roads and crashed up in these conditions. Recently, storms in the Raleigh area caused a wide path of “death and damage,” as reported here in the NY Times, with emergencies declared throughout North Carolina, Mississippi and Alabama. More extreme weather is predicted for the eastern seaboard with ever-increasing climate change. Hurricane frequency and strength have increased several times over the last few years. Remember when one good hurricane a year was normal? Now it’s dozens, so much so that the naming convention has changed completely: from alphabetical female names, to including male names, and now even to the Greek alphabet when the yearly list runs out.

Remember when California was the only place we expected to receive large earthquakes? Well, except for Japan, which reminded us once again of the devastation that can occur along the Pacific Rim. I was in the middle of Baja following the recent Japan earthquake and had to change plans due to a tsunami warning from that earthquake nearly 10,000 miles away, proving the point that being near the ocean following an earthquake can be risky.

The largest earthquake in 35 years hits Arkansas…what, you ask?! Arkansas? Yes, the largest in that state, and yet just one of more than 800 earthquakes in Arkansas since September 2010. Wow!! You can read more about it in this AP/Yahoo news article.

But even more spectacular (I bring up earthquakes in Arkansas merely as an example) is that the largest risk of large-scale earthquake damage in the US sits right under the middle of the country: the New Madrid Fault. Running directly under Kentucky, Indiana, Illinois, Tennessee, Mississippi and Arkansas, this baby is HUGE! It is capable of creating horizontal acceleration of 1.89g, almost 5 times greater than the ground acceleration at The Reno Technology Park near Reno, NV, which is located on stable ground absent of any earthquake faults. See this thesis on the effects of earthquakes on bridge design, which is the pinnacle of civil engineering for earthquakes, as they look at 75-year effects, not the 20 years typical of most building construction. Even Texas is not immune to earthquakes, having had damaging earthquakes in 1882, 1891, 1917, 1925, 1931, 1932, 1936, 1948, 1951, 1957, 1964, 1966, 1969, and 1974. Many of these were felt as far as two states away from Texas, which covers a very large area. I list all of these just to prove the point that even areas thought to be immune from damaging earthquakes have them, and more frequently than we care to remember. You can read more in this USGS article about Texas earthquakes here.

And thus the punch line is to consider data center site selection very carefully. Just because an earthquake has not happened for a long time does not mean that an area is immune to a damaging earthquake. Check out this map of large earthquake potential and look at the two large circles of converging lines in the middle of the US and under South Carolina; these are the areas of greatest earthquake threat to the public and to buildings in the US:

How about volcanoes? Sure, why worry unless you’re in the South Pacific, Hawaii or Costa Rica, right? Wrong. Over half of the world’s active volcanoes are in…did you guess?…the good ole US of A. That’s right. Most of those are in Alaska, as the Aleutian island chain is a pretty exciting place to be, and most of those in the continental US are located in Washington and Oregon. But guess which is the most exciting place in the US for a very damaging eruption, thousands of times more powerful than the atomic bombs exploded on Japan to end World War II? Wyoming. Yellowstone has long been famous for Old Faithful, heated by a geological hot spot, the same type that has created and is still creating the Hawaiian Islands. But new research calls it a supervolcano. Two of the larger eruptions from this supervolcano produced 2,500 times more ash than the Mt. St. Helens eruption in 1980, which dropped about 10 inches of ash on eastern Washington and elsewhere. And this hot spot is getting hotter. It is expected to impact Idaho, Wyoming and Montana with a greater frequency of earthquakes and a possible very large explosion that could wipe out a very large area. Read more about it here.

Why is all of this important to point out? Because we design and build data centers to withstand the impacts we EXPECT in a certain area, yet so many areas have more impacts than we imagined. Which leads me to site selection. Site selection isn’t as easy as looking at what has recently occurred or what we think might occur in an area; it should involve thorough research and an understanding of what the risks really are over time, and choosing a site that best meets our risk tolerance, or “comfort,” during the life of the data center. And all risks should be reviewed, even those that seem unlikely, as we can see from many of these examples that unlikely events can turn out to be devastating to any data center. Hence, location research is paramount to good site selection, and these issues must not be overlooked. A good example is the over 20 active volcanoes in the Portland and Seattle area. Be aware of the risks in your decision, or it could lead to a really bad day.
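As one way to make that kind of review concrete, here is a minimal Python sketch of scoring candidate sites against a hazard list. The hazards, weights, site names and scores below are all hypothetical placeholders; a real assessment would draw on USGS, NOAA and insurer data over the facility’s expected life, not a handful of guesses.

    # Weighted hazard exposure per candidate site; lower total is better.
    hazards = {"seismic": 0.30, "hurricane/ice storm": 0.25, "tornado": 0.15,
               "flood": 0.15, "volcanic": 0.15}  # weights sum to 1.0

    # Exposure per site, 0 (negligible) to 10 (severe), over a 20+ year horizon.
    sites = {
        "Site A": {"seismic": 2, "hurricane/ice storm": 1, "tornado": 2, "flood": 1, "volcanic": 0},
        "Site B": {"seismic": 7, "hurricane/ice storm": 0, "tornado": 1, "flood": 2, "volcanic": 4},
        "Site C": {"seismic": 3, "hurricane/ice storm": 6, "tornado": 3, "flood": 5, "volcanic": 0},
    }

    for name, exposure in sites.items():
        score = sum(weight * exposure[hazard] for hazard, weight in hazards.items())
        print(f"{name}: weighted risk score {score:.2f} (lower is better)")

Even a simple scorecard like this forces the unlikely-but-devastating hazards onto the table instead of leaving them to memory of what has happened recently.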