Archive for the ‘Data Center’ Category

How to save on water costs in your data center

Sunday, April 15th, 2012

Two weeks ago I spoke at the Recycled Water Use and Outreach Workshop in Sacramento. I know what you’re asking, “why is a data center guy talking at a recycled water conference?” Well, funny that you asked.

First of all, most of my ultra-efficient designs use water for cooling, often indirect evaporative systems. Hence, we trade energy use for water use. Water is far less costly than energy and often has a much lower carbon footprint and other environmental impact per unit of cooling than electricity. But it is always a bonus to use recycled water, as it has an even lower environmental impact than standard potable supply. Of course, all water IS recycled. There are only a finite number of water drops on this wonderful planet that sustains us, and every one of them has been around the water-cycle block at least a few times, so in essence, all water is recycled.

As we use water to partially or entirely cool our data centers, water plays an ever greater role in achieving the highest efficiency. Hence, water quality, capacity, cost and reliability of service are just as important as any other valuable input into our system of operations, which makes these factors, and the future cost of water, even more important in our site selection decisions. I've seen water cost anywhere from $0.10 to $10.00 per 1,000 gallons. Wow, what a spread! And I've seen it increase at 40% per year! Wouldn't it be nice to have a consistent price from a non-profit water system that YOU have control over and full visibility into all costs? And one that is built to meet the high-availability and quality standards for data centers, and is DEDICATED to data center use? That is what you get at the Reno Technology Park!
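To make that spread concrete, here is a quick back-of-the-envelope sketch in Python. The annual consumption figure and the escalation rate are my own illustrative assumptions, not data from any particular facility, but they show how fast the gap between $0.10 and $10.00 per 1,000 gallons, plus 40% annual increases, compounds over a facility's life:

```python
# Rough sketch: how the $0.10 vs. $10.00 per 1,000 gallons spread compounds
# over a data center's life. Consumption and escalation are illustrative
# assumptions, not figures from any specific site.

ANNUAL_GALLONS = 50_000_000   # assumed: ~50M gal/yr for an evaporatively cooled site
YEARS = 10
ESCALATION = 0.40             # the 40%/yr rate increases I have seen at some utilities

def lifetime_water_cost(rate_per_kgal: float, escalation: float = 0.0) -> float:
    """Total water cost over YEARS, with the rate escalating each year."""
    total = 0.0
    rate = rate_per_kgal
    for _ in range(YEARS):
        total += ANNUAL_GALLONS / 1_000 * rate
        rate *= 1 + escalation
    return total

for rate in (0.10, 10.00):
    print(f"${rate:>5.2f}/kgal, flat rate: ${lifetime_water_cost(rate):>13,.0f}")
    print(f"${rate:>5.2f}/kgal, +40%/year: ${lifetime_water_cost(rate, ESCALATION):>13,.0f}")
```

With these assumed numbers, the same cooling load costs anywhere from tens of thousands to tens of millions of dollars over ten years, which is why water price and escalation belong in the site selection model.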

And it's not just the supply but also the discharge of water. I learned much about water discharge challenges in Quincy, WA, when building the Yahoo! data center there, as the local water utility wanted Microsoft and Yahoo! to pony up $10-15 million to pay for a new water treatment plant to handle the QUANTITY of our discharge water. Our quality was fine, but the quantity was too much for the existing systems. This led me to find solutions to reduce the cooling tower blowdown and avoid this $10+ million unplanned cost to our project.

I've always been a fan of chemical-free water treatment systems, but when looking for new solutions to solve our problem, I came across WCTI, which makes a chemical-free system quite different from other systems, one that could get our cycles of concentration up over 200! Yes, that is over 200 cycles of concentration, which means nearly zero blowdown, which in turn lowers water consumption by 30-50% and avoided paying for a new water treatment plant for the city. And it's truly chemical free (not even biocides), which means it's safer for people and the environment, as well as much lower in cost. Keep those chiller tubes and/or pipes clean!
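For those curious why cycles of concentration matter so much, here is a minimal sketch of the standard cooling tower water balance (blowdown = evaporation / (cycles - 1)). The circulation flow and range below are illustrative assumptions, not numbers from the Quincy project:

```python
# Minimal sketch of the standard cooling-tower water balance, showing why
# pushing cycles of concentration (COC) to 200+ nearly eliminates blowdown.
# Flow and delta-T are illustrative assumptions.

CIRCULATION_GPM = 10_000      # assumed condenser-water flow
DELTA_T_F = 10                # assumed tower range, deg F

# Common rule of thumb: evaporation ~ 0.001 x flow x range
evaporation_gpm = 0.001 * CIRCULATION_GPM * DELTA_T_F

def water_balance(cycles: float) -> tuple[float, float]:
    """Return (blowdown, total makeup) in gpm for a given COC; drift ignored."""
    blowdown = evaporation_gpm / (cycles - 1)
    return blowdown, evaporation_gpm + blowdown

for coc in (2, 3, 4, 10, 200):
    bd, mu = water_balance(coc)
    print(f"COC {coc:>3}: blowdown {bd:6.1f} gpm, makeup {mu:6.1f} gpm")
```

If a tower had been running at only 2-3 cycles, eliminating nearly all of its blowdown is roughly where a 30-50% reduction in makeup water comes from.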

This is one of the comprehensive solutions that we provide for our clients at MegaWatt Consulting. It’s about saving money, and water is just another critical part of our system. Reach out to us to learn more!

Call for Case Studies and Data Center Efficiency Projects

Wednesday, February 15th, 2012

As many of you know, I have chaired what has become known as the SVLG Data Center Efficiency Summit since the end of its first year's program. That was the fall of 2008, with a wonderful summit held at Sun Microsystems' Santa Clara campus. This has been a customer-focused, volunteer-driven project with case studies presented by end users about their efficiency achievements. The goal is for all case studies to share actual results: the savings achieved, what works, the best ways to improve efficiency, and ideas and support for all kinds of efficiency improvements within our data centers. We've highlighted software, hardware and infrastructure improvements, as well as new technologies and processes, in the belief that we all gain when we share. Through collaboration we all improve. And as an industry, if we all improve, we avoid over-regulation, we help to preserve our precious energy supplies and keep their costs from escalating as quickly, we help to reduce the emissions our industry generates, and we drive innovation. In essence, we all gain when we share ideas with each other.

As such, I have come to think of this program as immensely valuable: an industry tool for efficiency and improvement for all. Consequently, I have volunteered hundreds of hours of my time and forgone personal financial gain to chair and help advance this program, along with many other volunteers who have also given much of their time to advance this successful and valuable program. I do not have the resources to continually give my time as a volunteer (I wish I did), but I do hope to provide more support or time with future corporate sponsorship.

I do hope that you can participate in this valuable program and the corresponding event held in the late fall every year since 2008. Below is more information from the SVLG. You can also call me for more info.

Attention data center operators, IT managers, energy managers, engineers and vendors of green data center technologies: A call for case studies and demonstration projects is now open for the fifth annual Data Center Efficiency Summit to be held in November 2012.

The Data Center Efficiency Summit is a signature event of the Silicon Valley Leadership Group in partnership with the California Energy Commission and the Lawrence Berkeley National Laboratory, which brings together engineers and thought leaders for one full day to discuss best practices, cutting edge new technologies, and lessons learned by real end users – not marketing pitches.

We welcome case studies presented by an end user or customer. If you are the vendor of an exciting new technology, please work with your customers to submit a case study. Case studies of built projects with actual performance data are preferred.

Topics to consider:
Energy Efficiency and/or Demand Response
Efficient Cooling (Example: Liquid Immersion Cooling)
Efficient Power Distribution (Example: DC Power)
IT Impact on Energy Efficiency (Example: Energy Impact of Data Security)
Energy Efficient Data Center Operations
In the final version of your case study, you will need to include:
Quantifiable savings in terms of kWh savings, percentage reduction in energy consumption, annual dollar savings for the data center, or CO2 reduction
Costs and ROI including all implementation costs with a breakdown (hardware, software, services, etc) and time horizon for savings
Description of site environment (age, size or load, production or R&D use)
List of any technology vendors or NGO partners associated with project
Please submit a short (1 page or less) statement of interest and description of your project or concept by March 2, 2012 to asmart@svlg.org with subject heading: DCES12. Final case studies will need to be submitted in August 2012. Submissions will be reviewed and considered in the context of this event.
Interested in setting up a demonstration project at your facility? We may be able to provide technical support and independent evaluation. Please call Anne at 408-501-7871 for information.

The Olivier Sanche Tree and Room @ eBay

Sunday, September 18th, 2011

This week I not only flew on 8 Southwest flights in one week (I believe this may be a new record for me for flights in one week on the same airline), but I also had the pleasure and privilege of touring eBay's Topaz data center.

We all know that I wouldn't release any confidential data. Having been in the data center industry for well over a decade, having worked for Yahoo, Google, Sun and BEA, and having completed large data center projects for financial institutions, banks, government entities, educational and research entities, Facebook, Equinix, and many others, I know and understand the importance, both to my reputation and to others, of maintaining confidential information. So I will not share anything more about the data center; you can learn from what is already available from public sources.

However, I do want to comment on one item that I did see which does not have any confidentiality tied to it: the Olivier Sanche Memorial Tree and conference room. It touched me very much. Olivier and I were working on a project and talking literally just two days before he passed. Olivier and I were the exact same age. His job at Apple was essentially the same as mine at Yahoo. And at the time he passed, we were both running fast, traveling to many countries, continents and states each month. We were trying to do everything we could to support our growing data center demand at the lowest cost and with the highest energy efficiency possible, and to help the industry achieve more as well by collaborating, sharing and guiding. And even as he touched my heart and those of many others in the data center industry, he managed to be the best dad possible.

While I enjoyed touring the eBay data center, it was the moment I spent reading Olivier's memorial, set against the tree that is still small but growing to eventually become a large icon at the entrance of this facility, that stayed with me. It was that moment under the tree, reading the memorial, that I once again remembered Olivier, a touching reminder of how many people he touched.

I applaud the fine folks at eBay for the very kind memorial to Olivier. We should all strive to support each other, work together, collaborate, and most of all, enjoy each other's company. Now get out there and do something good today.

Data Center Site Selections need to be more comprehensive than they once were

Thursday, July 7th, 2011

Having completed site selections for many data centers in more than 20 countries, including for Yahoo, Google, Facebook, Equinix, Exodus, and many others, I've learned quite a few things. I've been part of the changing criteria, evolving from just being near fiber lines to adding in power capacity, energy price, sales taxes and property taxes, and now including climate, carbon impact and water supply as well. I think we'll soon see income tax added in as a major cost driver for site selections. Having been doing this for over a decade, I've taken on these new elements of data center site selection and driven the focus on them. Nearly 10 years ago I considered power capacity, energy price, water supply, carbon intensity of power supply, climate and taxes, only to see the industry finally accept all of these principles as primary decision factors.

While the risk of natural and human-caused disasters has always been a part of every data center site selection, that risk has seriously changed. My 20+ page checklist of hundreds of items, from nearby manhole covers, flight paths and train tracks to the nearest police station, has not been as heavily used, as it seems to add less value than thinking about the BIG natural disasters that can occur and the unforeseen human-caused disasters. While we used to worry about a truckload of guys with AK-47s jumping out to break into a data center, the reality is that this is a thin probability and one that is difficult to prevent. Meanwhile, the disasters we can actually do something about, both known and unforeseen, are the ones we have not focused on well.

For example, has anyone thought about their utility system being hacked and shut down for an extended period of time? Have you asked your electric utility if they are NERC CIP compliant, to ensure they have a much lower chance of being hacked and shut down? Have you thought about your electric utility meter, water meter, main switchboard and generator switchgear being connected to the Internet and/or to your utilities, and thus able to be hacked, shut down or damaged?

And the main thing: how about natural disasters? As an industry, we've built data centers in seismically active areas (e.g., Japan, California, Oregon (also with extreme tsunami risk) and Washington) and build so the building stays up, but we don't think about all of the IT gear shuffling about and the personnel getting hurt. A building that stands while the IT gear rolls around like marbles isn't a data center that will sustain an earthquake, only one that will memorialize what happened while we rebuild the inside.

We build data centers in hurricane and tornado areas (Texas, Kansas, Nebraska, North Carolina, South Carolina, Virginia, Georgia) and build for them pretty well, but do we think about what has not yet come but likely will?

I've written before about how the most dangerous seismic area in the US is not on the West Coast or even the East Coast, but rather the New Madrid fault zone, lying right under St. Louis, Memphis and a large part of middle America.

Lately we've had tremendous flooding along the Mississippi and Missouri Rivers; yesterday massive dust storms hit Arizona. Look at these amazing photos of the dust storm that hit Phoenix (http://www.time.com/time/photogallery/0,29307,2081646_2290849,00.html). And future heat storms will likely add to the dust storms in the Phoenix area. Do you want your data center operating in this?

The frequency of severe hurricanes and tornadoes has increased many fold over the last decade, and we saw more serious versions of each over the last several months, including in places we weren't expecting them, such as Massachusetts and Missouri, where tornadoes tore through very robust buildings, even a hospital data center (http://www.datacenterdynamics.com/focus/archive/2011/06/missouri-tornado-destroys-hospital-data-center). Look at these photos of devastation in Alabama from recent tornadoes: (http://www.nytimes.com/2011/05/05/us/05missing.html?_r=1&nl=todaysheadlines&emc=tha23). Imagine you and your employees living through something like this; would they even come to work? One would likely need to shut down the data center for lack of staff even if everything kept working.

My point is: do not overlook the seriousness of your data center site selection. Consider what MAY happen, with some probability, and don't assume that just because something hasn't happened yet, it won't. Research the probabilities. The web is a wonderful tool for this information, and so are your data center site selection experts at MegaWatt Consulting and others. Use us to help you avoid future problems.

Stay healthy and let’s help each other grow our industry. KC Mares

The Design of NCAR’s “Chillerless” data center with over 600 Watts/SF

Sunday, May 22nd, 2011

“Chiller-less”, “refrigeration-less”, and “compressor-less” designs are something I have been striving toward for several years, with my testing and use of air-economized systems in data centers starting in 2002. In 2008-2009, I was lucky to join Rumsey Engineers (now the Integral Group) as a consultant to work on data center projects. It was a fantastic experience, as Rumsey Engineers designs the most efficient mechanical systems of any team I know. In 2009, they believed they had more LEED Platinum buildings than any other engineering firm, and their numbers back it up.

Together in early 2009 we led a design charrette for a new data center for the National Center for Atmospheric Research (NCAR), the folks who study climate data. As part of our design scope, we researched future generations of High-Performance Computing (HPC, aka supercomputer) equipment: their expected future energy use, load densities, cooling system connections and inlet temperature requirements (some were air based, others water based). We looked at future generations of equipment because, by the time the data center was built and the systems ordered and delivered, densities and cooling system connections would be different than they are today. This is a key point that we make on all of our projects: look at what the hardware will need several years from now, as it usually takes 1-2 years to build a data center, several years to fully load it, and we expect it to meet our operational needs for 10, 20 or more years. So, if the median of the data center's life will be 7-15+ years away, then why would we design it to meet today's computers? This is a mistake we see often in many designs and site selections. Life changes; we must think ahead.

And this is why I research and pay attention to many cutting- and leading-edge technologies, and why I sit on the boards of new and innovative technology companies. This helps me see the future. And even though I was shocked to find future HPC systems with densities of over 2,500 Watts per square foot, I know that many computing systems of the future will use much lower densities than the average today, and there are always many technologies that we employ, not just one. Hence, we took a pragmatic approach to this analysis of future HPC systems and the needs of the leading researchers in climate change. (Incidentally, we also did an operating cost analysis of HPC systems expected to come out between 2012 and 2014, and it yielded fairly broad cost differences: a first pass based on compute performance alone would seem to favor one system, while simply purchasing more of another system to get the same total performance would still cost less, stressing the important point to always choose equipment that affords the lowest true total cost of ownership.)
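To illustrate the kind of comparison I mean, here is a small sketch with made-up numbers (not the actual NCAR analysis): a system that looks best on raw performance per node can still lose on total cost once you count the extra nodes and the energy needed to reach the same total performance.

```python
# Illustrative-only sketch of a total-cost-of-ownership comparison between
# two hypothetical HPC systems. All numbers are made up for illustration.

ENERGY_RATE = 0.06      # $/kWh, assumed
PUE = 1.11              # design PUE from the analysis above
YEARS = 5
HOURS = 8760 * YEARS

def tco(nodes: int, capex_per_node: float, kw_per_node: float) -> float:
    """Capital cost plus facility-level energy cost over the study period."""
    energy_kwh = nodes * kw_per_node * HOURS * PUE
    return nodes * capex_per_node + energy_kwh * ENERGY_RATE

target_tflops = 1_000
# System A: faster per node, pricier, hotter. System B: slower, cheaper, cooler.
a_nodes = round(target_tflops / 1.0)   # 1.0 TFLOPS/node (assumed)
b_nodes = round(target_tflops / 0.7)   # 0.7 TFLOPS/node (assumed)

print(f"System A ({a_nodes} nodes): ${tco(a_nodes, 9_000, 0.50):,.0f}")
print(f"System B ({b_nodes} nodes): ${tco(b_nodes, 5_000, 0.40):,.0f}")
```

With these made-up inputs, System B needs roughly 40% more nodes yet still comes out ahead on five-year cost, which is exactly the kind of result a true total-cost-of-ownership analysis surfaces.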

Since the site chosen for this data center was Cheyenne, Wyoming, in a state with one of the highest percentages of coal-generated electricity, energy efficiency was essential in this design. Although we were pretty certain we knew which type of mechanical system would be most energy efficient (and likely also lowest cost to build; the two almost always go hand-in-hand when working pragmatically and holistically), we reviewed a rough design of several systems, including a calculated annual PUE and a rough estimated build cost for each. We explored airside economization with 68F and 90F supply air temperatures, the Kyoto cooling system (heat wheel), a modified heat wheel approach with economization, and waterside economization with 46F and 64F chilled supply water. Our modified heat wheel and our high-supply-temperature air- and water-economized solutions did not require chillers; the temperatures were what they were because we pushed them up until we no longer required chillers. We chose the water-economized system, which had been our guess at the best system before we started any design analysis: it provided 64F supply water, which was important since many HPC systems of the future will only run on chilled water and this temperature is acceptable for the majority of the systems, and it also provided the lowest PUE, about 1.11, AND the lowest cost to build. This once again proves my motto that we build the most efficient data centers at the lowest cost; the two seemingly disparate goals of capital cost and operating expense are once again aligned. Hence why we take a very pragmatic and holistic approach with an open mind to achieve the most.
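As a rough illustration of how options like these get compared, the sketch below annualizes energy use from a steady IT load and each option's modeled PUE. Only the ~1.11 figure for the chosen water-economized design comes from the analysis above; the IT load, utility rate and the other PUE values are assumptions I'm using purely for illustration.

```python
# Rough sketch: annual energy and overhead cost from a constant IT load and
# each cooling option's modeled PUE. Except for the ~1.11 chosen design,
# the PUE values, IT load and rate are illustrative assumptions.

IT_LOAD_KW = 8_000       # assumed steady-state IT load
RATE = 0.05              # $/kWh, assumed industrial rate
HOURS = 8760

options = {
    "Airside economizer, 68F supply": 1.20,   # assumed
    "Kyoto heat wheel":               1.18,   # assumed
    "Waterside economizer, 64F CHW":  1.11,   # per the design analysis above
}

for name, pue in options.items():
    total_kwh = IT_LOAD_KW * HOURS * pue
    overhead_kwh = IT_LOAD_KW * HOURS * (pue - 1)
    print(f"{name:34s} PUE {pue:.2f}  total {total_kwh/1e6:5.1f} GWh/yr  "
          f"overhead cost ${overhead_kwh * RATE:>9,.0f}/yr")
```

Even with these assumed numbers, a few hundredths of PUE at this scale is worth a few hundred thousand dollars a year, before counting the difference in build cost.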

This new 153,000 SF building was designed to accommodate and secure the Scientific Computing Division's (SCD) future in sustaining the computing initiatives and needs of UCAR's scientific research constituents. The final design was based upon NCAR's actual computing and data storage needs and a thorough review of future High Performance Computing (HPC) and storage technologies, leading to a 625 Watts/SF HPC space and a 250 Watts/SF medium-density area. The data center is divided into two raised-floor modules of 12,000 SF each, with a separate data tape system area to reduce costs, increase efficiency and provide different temperature and humidity requirements than the HPC area. Also provided are a 16,000 SF office and visitor area heated by waste heat from the data center and a total facility capacity of 30 MVA.
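For a sense of what those density figures imply, here is a back-of-the-envelope sketch of the critical load. How the two 12,000 SF modules split between the 625 W/SF and 250 W/SF densities is my assumption for illustration; the PUE of 1.14 is the expected annual figure described below.

```python
# Back-of-the-envelope critical load from the published density figures.
# Assigning one module to each density is an illustrative assumption.

MODULE_SF = 12_000
hpc_kw    = 625 * MODULE_SF / 1_000    # one module at HPC density
medium_kw = 250 * MODULE_SF / 1_000    # one module at medium density
it_kw = hpc_kw + medium_kw

PUE = 1.14                              # expected annual operating PUE
facility_kw = it_kw * PUE

print(f"HPC module:      {hpc_kw:,.0f} kW")
print(f"Medium module:   {medium_kw:,.0f} kW")
print(f"Total IT load:   {it_kw:,.0f} kW")
print(f"Facility demand: {facility_kw:,.0f} kW at PUE {PUE} "
      f"(vs. 30 MVA service at full build-out)")
```

Under that assumed split, the critical load lands around 10.5 MW and the facility demand around 12 MW, leaving headroom within the 30 MVA service for growth, redundancy and power factor.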

Unique requirements of this high-density HPC data center were to also achieve ultra-high energy efficiency and LEED Silver certification on a modest construction budget. Various cooling options were analyzed, including Kyoto and other heat wheels, air economization, a creative solution of direct heat exchange with the city water supply pipe, and variations of water-economized systems. Ultimately, LEED Gold certification and an annual operating PUE of about 1.14 are expected. A PUE this low was thought to be impossible at the time of design (early 2009), especially for such high density at TIER III. Through creative problem solving, the low PUE is obtained by designing a 9’ interstitial space above the raised floor combined with a 10’ waffle-grid raised floor to provide a low-pressure-drop air recirculation path as part of the building. Ten day-one chillers of 100 tons each provide supplemental cooling and optimum efficiency as load varies during hot summer months, while an indirect evaporative system with 96 fans in a fan wall provides ultra-low-energy cooling. An on-site water supply tank, nine standby generators of 2.5 MVA each at full build-out, six 750 kVA UPS modules and other systems support the overall low PUE and low construction budget for this high-density HPC data center.

Here is a drawing of this data center now under construction: