“Chiller-less”, “refrigeration-less”, and “compressor-less” designs are something I have been striving toward for several years, with my testing and use of air-economized systems in data centers starting in 2002. In 2008-2009, I was lucky to join Rumsey Engineers (now the Integral Group) as a consultant to work on data center projects. It was a fantastic experience, as Rumsey Engineers designs the most efficient mechanical systems of any team I know. In 2009, they believed they had more LEED Platinum buildings than any other engineering firm, and their numbers back that up.
Together in early 2009 we led a design charrette for a new data center for the National Center for Atmospheric Research (NCAR), the folks who study climate data. As part of our design scope, we researched future generations of High-Performance Computing (HPC, aka supercomputer) equipment: its expected energy use, load density, cooling system connections, and inlet temperature requirements (some were air based, others water based). We looked at future generations of equipment because, by the time the data center was built and the systems ordered and delivered, densities and cooling system connections would be different than they are today. This is a key point we make on all of our projects: look at what the hardware systems will need several years from now, as it usually takes 1-2 years to build a data center, several years to fully load it, and we expect it to meet our operational needs for 10, 20 or more years. So, if the median of the data center’s life will be 7-15+ years away, then why would we design it to meet today’s computers? This is a mistake we often see in others’ designs and site selections. Life changes; we must think ahead.
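To make that planning horizon concrete, here is a minimal sketch of the arithmetic. The build, fill, and service-life durations are assumptions chosen to be consistent with the ranges quoted above, not data from any specific project:

```python
# Planning-horizon arithmetic with assumed durations (years).
build = (1, 2)      # design + construction
fill = (1, 3)       # time to fully load the floor
service = (10, 20)  # expected operating life once loaded

# Median point of the operating life, measured from the design date:
low = build[0] + fill[0] + service[0] / 2    # 1 + 1 + 5  = 7
high = build[1] + fill[1] + service[1] / 2   # 2 + 3 + 10 = 15
print(f"Median of operating life: {low:.0f} to {high:.0f}+ years after design")
```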
And this is why I research and pay attention to so many cutting-edge and leading-edge technologies, and why I sit on the boards of new and innovative technology companies. It helps me see the future. And even though I was shocked to find future HPC systems had densities of over 2,500 Watts per square foot, I know that many computing systems of the future will use much lower densities than today’s average, and there are always many technologies that we employ, not just one. Hence, we took a pragmatic approach to this analysis of future HPC systems and the needs of the leading researchers in climate change. (Incidentally, we also did an operating cost analysis of HPC systems expected to come out between 2012 and 2014, and it yielded fairly broad cost differences: a first pass based upon compute performance alone would seem to favor one system, while simply purchasing more of another system to reach the same performance would still cost less. This stresses the important point to always choose equipment that affords the lowest true total cost of ownership.)
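As a hedged illustration of that reversal (every figure below is invented for the example; none comes from the actual study), a quick total-cost comparison can flip the conclusion a performance-only first pass suggests:

```python
import math

# Hypothetical two-system comparison; all figures are invented for illustration.
# System A is faster per node but pricier; System B is slower but cheaper per node.
target_tflops = 1000.0
systems = {
    "A": {"tflops_per_node": 50.0, "capex_per_node": 400_000, "kw_per_node": 30.0},
    "B": {"tflops_per_node": 25.0, "capex_per_node": 150_000, "kw_per_node": 18.0},
}

years = 5
usd_per_kwh = 0.06  # assumed blended electricity rate

for name, s in systems.items():
    nodes = math.ceil(target_tflops / s["tflops_per_node"])
    capex = nodes * s["capex_per_node"]
    energy = nodes * s["kw_per_node"] * years * 8760 * usd_per_kwh
    print(f"System {name}: {nodes} nodes, capex ${capex:,.0f}, "
          f"{years}-yr energy ${energy:,.0f}, total ${capex + energy:,.0f}")
```

With these made-up numbers, System A looks better per node, yet buying more of System B delivers the same aggregate performance for roughly $1.7M less over five years, which is exactly the kind of reversal a true TCO analysis surfaces.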
Because the site chosen for this data center was Cheyenne, Wyoming, a state with one of the highest percentages of coal-generated electricity, energy efficiency in this design was essential. Although we were pretty certain we knew which type of mechanical system would be most energy efficient (and likely also lowest cost to build; the two almost always go hand-in-hand when working pragmatically and holistically), we reviewed a rough design of several systems, including a calculated annual PUE and a rough estimated build cost for each. We explored airside economization with 68F and 90F supply air temperatures, the Kyoto cooling system (heat wheel), a modified heat wheel approach with economization, and waterside economization with 46F and 64F chilled supply water. Our modified heat wheel and our high-supply-temperature air- and water-economized solutions did not require chillers; the temperatures were what they were because we pushed them up until chillers were no longer required. We chose the water-economized system, which had been our guess at the best system before we started any design analysis. It provided 64F supply water, which was important because many HPC systems of the future will only run on chilled water, and this temperature is acceptable for the majority of those systems; it also provided the lowest PUE of about 1.11 AND the lowest cost to build. This once again proves my motto that we build the most efficient data centers at the lowest cost: the two seemingly disparate goals of capital cost and operating expense are once again aligned. This is why we take a very pragmatic and holistic approach with an open mind to achieve the most.
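For readers new to the metric: PUE is total facility energy divided by IT equipment energy, so a PUE of 1.11 means only about 11% overhead on top of the IT load. A minimal sketch, with made-up monthly energy figures, shows how an annual PUE rolls up:

```python
# Annual PUE = total facility energy / IT equipment energy.
# Monthly kWh figures below are invented purely to illustrate the calculation.
it_kwh = [5_000_000] * 12       # assumed flat IT load
overhead_kwh = [550_000] * 12   # cooling, UPS and distribution losses, lighting

it_total = sum(it_kwh)
facility_total = it_total + sum(overhead_kwh)
print(f"Annual PUE: {facility_total / it_total:.2f}")  # -> 1.11
```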
This new 153,000 SF building was designed to accommodate and secure the Scientific Computing Division’s (SCD) future in sustaining the computing initiatives and needs of UCAR’s scientific research constituents. The final design was based upon NCAR’s actual computing and data storage needs and a thorough review of future High Performance Computing (HPC) and storage technologies, leading to a 625 Watts/SF HPC space and a 250 Watts/SF medium-density area. The data center is divided into two raised-floor modules of 12,000 SF each, with a separate data tape system area to reduce costs, increase efficiency, and accommodate different temperature and humidity requirements than the HPC area. Also provided are a 16,000 SF office and visitor area heated by waste heat from the data center and a total facility capacity of 30 MVA.
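As a quick sanity check on those densities (using only the figures quoted above; the article does not state the actual build-out mix per module), the per-module IT load works out as follows:

```python
# IT load implied by the published densities and module size.
module_sf = 12_000
for label, w_per_sf in (("HPC space", 625), ("medium-density area", 250)):
    mw = w_per_sf * module_sf / 1_000_000
    print(f"One 12,000 SF module at {w_per_sf} W/SF ({label}): {mw:.1f} MW of IT load")
```

Two modules at full HPC density would draw roughly 15 MW of IT load, leaving headroom within the 30 MVA facility capacity for cooling and distribution overhead.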
Unique requirements of this high-density HPC data center included achieving ultra-high energy efficiency and LEED Silver certification on a modest construction budget. Various cooling options were analyzed, including Kyoto and other heat wheels, air economization, a creative solution of direct heat exchange with the city water supply pipe, and variations of water-economized systems. Ultimately, LEED Gold certification and an annual operating PUE of about 1.14 are expected. A PUE this low was thought to be impossible at the time of design (early 2009), especially for such high density at Tier III. Through creative problem solving, the low PUE is obtained by designing a 9’ interstitial space above the raised floor combined with a 10’ waffle-grid raised floor to provide a low-pressure-drop air recirculation system built into the building itself. Ten day-one chillers of 100 tons each provide supplemental cooling and optimum efficiency as load varies during hot summer months, while an indirect evaporative system with 96 fans in a fan wall provides ultra-low-energy cooling. An on-site water supply tank, a total of nine standby generators of 2.5 MVA each at full build-out, six 750 kVA UPS modules, and other systems support the overall low PUE and low construction budget for this high-density HPC data center.
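Rolling up the equipment counts quoted above gives a feel for the capacity behind those numbers. The ton-to-kW conversion is the standard refrigeration constant; everything else comes straight from the counts in this paragraph:

```python
# Capacity roll-up from the quoted equipment counts.
KW_PER_TON = 3.517  # 1 refrigeration ton removes 3.517 kW of heat

chiller_tons = 10 * 100   # ten day-one chillers at 100 tons each
gen_mva = 9 * 2.5         # nine standby generators at full build-out
ups_mva = 6 * 750 / 1000  # six 750 kVA UPS modules

print(f"Supplemental cooling: {chiller_tons} tons (~{chiller_tons * KW_PER_TON / 1000:.1f} MW)")
print(f"Standby generation:   {gen_mva:.1f} MVA")
print(f"UPS capacity:         {ups_mva:.1f} MVA")
```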