Archive for the ‘Data Center’ Category

Call for Case Studies and Data Center Efficiency Projects

Wednesday, February 15th, 2012

As many of you know, I have chaired what has become known as the SVLG Data Center Efficiency Summit since the end of its first year's program, in the fall of 2008, a wonderful summit held at Sun Microsystems' Santa Clara campus. This has been a customer-focused, volunteer-driven project, with case studies presented by end users about their efficiency achievements. The goal is for all case studies to share actual results of the savings, to show what works and the best ways to improve efficiency, and to provide ideas and support for all kinds of efficiency improvements within our data centers. We've highlighted software, hardware and infrastructure improvements, as well as new technologies and processes, in the belief that we all gain when we share. Through collaboration we all improve. And as an industry, if we all improve, we avoid over-regulation, we help preserve our precious energy supplies and keep their costs from escalating as quickly, we reduce the emissions our industry generates, and we drive innovation. In essence, we all gain when we share ideas with each other.

As such, I have come to regard this program as immensely valuable, an industry tool for efficiency and improvement for all. Consequently, I have volunteered hundreds of hours of my time and forgone personal financial gain to chair and help advance this program, along with many other volunteers who have also given much of their time to advance this successful and valuable program. I do not have the resources to give my volunteer time indefinitely, much as I wish I did, but I do hope to provide more support or time with future corporate sponsorship.

I do hope that you can participate in this valuable program and the corresponding event held in the late fall every year since 2008. Below is more information from the SVLG. You can also call me for more info.

Attention data center operators, IT managers, energy managers, engineers and vendors of green data center technologies: A call for case studies and demonstration projects is now open for the fifth annual Data Center Efficiency Summit to be held in November 2012.

The Data Center Efficiency Summit is a signature event of the Silicon Valley Leadership Group, in partnership with the California Energy Commission and the Lawrence Berkeley National Laboratory, which brings together engineers and thought leaders for one full day to discuss best practices, cutting-edge new technologies, and lessons learned by real end users, not marketing pitches.

We welcome case studies presented by an end user or customer. If you are the vendor of an exciting new technology, please work with your customers to submit a case study. Case studies of built projects with actual performance data are preferred.

Topics to consider:
Energy Efficiency and/or Demand Response
Efficient Cooling (Example: Liquid Immersion Cooling)
Efficient Power Distribution (Example: DC Power)
IT Impact on Energy Efficiency (Example: Energy Impact of Data Security)
Energy Efficient Data Center Operations
In the final version of your case study, you will need to include:
Quantifiable savings in terms of kWh savings, percentage reduction in energy consumption, annual dollar savings for the data center, or CO2 reduction
Costs and ROI including all implementation costs with a breakdown (hardware, software, services, etc) and time horizon for savings
Description of site environment (age, size or load, production or R&D use)
List of any technology vendors or NGO partners associated with project
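To illustrate the quantifiable-savings reporting asked for above, here is a minimal sketch that turns measured energy data into the kWh, percentage, dollar, CO2 and payback figures a case study needs. Every number in it is an illustrative assumption of mine, not an SVLG requirement or real project data.

```python
# Hypothetical example: deriving case-study savings metrics from measured data.
# All inputs below are assumed values for illustration only.

BASELINE_KWH = 4_000_000       # annual energy use before the project (assumed)
RETROFIT_KWH = 3_100_000       # annual energy use after the project (assumed)
RATE_PER_KWH = 0.10            # $/kWh utility tariff (assumed)
CO2_LBS_PER_KWH = 0.9          # lbs CO2 per kWh of grid power (assumed)
IMPLEMENTATION_COST = 250_000  # hardware + software + services (assumed)

saved_kwh = BASELINE_KWH - RETROFIT_KWH
pct_reduction = 100 * saved_kwh / BASELINE_KWH
annual_dollars = saved_kwh * RATE_PER_KWH
co2_tons = saved_kwh * CO2_LBS_PER_KWH / 2000   # short tons
simple_payback_years = IMPLEMENTATION_COST / annual_dollars

print(f"Savings: {saved_kwh:,} kWh/yr ({pct_reduction:.1f}%)")
print(f"Dollar savings: ${annual_dollars:,.0f}/yr")
print(f"CO2 reduction: {co2_tons:,.0f} tons/yr")
print(f"Simple payback: {simple_payback_years:.1f} years")
```

A real submission would of course break the implementation cost down by category and state the time horizon for the savings, as the list above requires.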
Please submit a short (1 page or less) statement of interest and description of your project or concept by March 2, 2012, with the subject heading DCES12. Final case studies will need to be submitted in August 2012. Submissions will be reviewed and considered in the context of this event.
Interested in setting up a demonstration project at your facility? We may be able to provide technical support and independent evaluation. Please call Anne at 408-501-7871 for information.

The Olivier Sanche Tree and Room @ eBay

Sunday, September 18th, 2011

This week I had the pleasure of not only flying on 8 Southwest flights in one week, which I believe may be a new personal record for flights in one week on the same airline, but also the pleasure and privilege of touring eBay's Topaz data center.

We all know that I wouldn't release any confidential data. Having been in the data center industry now for well over a decade, having worked for Yahoo, Google, Sun and BEA, and having completed large data center projects for financial institutions, banks, government entities, educational and research entities, Facebook, Equinix and many others, I know and understand the importance of maintaining others' confidential information, both to my reputation and in its own right. So I will not share anything more about the data center; you can learn from what is already available from public sources.

However, I do want to comment on one item I saw that has no confidentiality tied to it: the Olivier Sanche Memorial Tree and conference room. It touched me very much. Olivier and I were working on a project together and had talked literally two days before he passed. Olivier and I were exactly the same age. His job at Apple was essentially the same as mine at Yahoo. And at the time he passed, we were both running fast, traveling to many countries, continents and states each month. We were trying to do everything we could to support our growing data center demand at the lowest cost and the highest energy efficiency possible, and to help the industry achieve more as well by collaborating, sharing and guiding. And even as he touched my heart and those of many others in the data center industry, he managed to be the best dad possible.

While I enjoyed touring the eBay data center, it was the moment I spent reading Olivier's memorial beside the tree, small now but growing to eventually become a large icon at the entrance of this facility, that stayed with me. It was in that moment, under the tree and reading the memorial, that I once again remembered Olivier and was reminded of how many lives he touched.

I applaud the fine folks at eBay for the very kind memorial to Olivier. We should all strive to support each other, work together, collaborate and, most of all, enjoy each other's company. Now get out there and do something good today.

Data Center Site Selections need to be more comprehensive than they once were

Thursday, July 7th, 2011

Having completed site selections for many data centers in 20+ countries, including site selections for Yahoo, Google, Facebook, Equinix, Exodus and many others, I've learned quite a few things. I've been part of the changing criteria, which have evolved from simply being near fiber lines to adding power capacity, energy price, sales taxes and property taxes, and now climate, carbon impact and water supply as well. I think we'll soon see income tax added as a major cost driver for site selections too. Having been doing this for over a decade, I've taken on these new elements of data center site selection and driven the focus on them. Nearly 10 years ago I was already considering power capacity, energy price, water supply, carbon intensity of the power supply, climate and taxes, and have since watched the industry finally accept all of these principles as primary decision factors.
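The criteria above can be thought of as a weighted scorecard. Here is a minimal sketch of that idea; the criteria names mirror those in the paragraph, but the weights and candidate-site scores are entirely made-up examples, not actual client data or my real weighting.

```python
# Illustrative weighted site-selection scorecard. Weights and scores are
# hypothetical; a real selection would be far more detailed.

WEIGHTS = {                       # relative importance (assumed), sums to 1.0
    "fiber_connectivity": 0.10,
    "power_capacity":     0.20,
    "energy_price":       0.20,
    "taxes":              0.15,   # sales + property (income tax may join later)
    "climate":            0.15,   # e.g. free-cooling hours
    "carbon_intensity":   0.10,
    "water_supply":       0.10,
}

def site_score(scores: dict) -> float:
    """Weighted sum of 0-10 criterion scores for one candidate site."""
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

site_a = {"fiber_connectivity": 9, "power_capacity": 7, "energy_price": 5,
          "taxes": 6, "climate": 4, "carbon_intensity": 5, "water_supply": 7}
site_b = {"fiber_connectivity": 6, "power_capacity": 9, "energy_price": 8,
          "taxes": 7, "climate": 9, "carbon_intensity": 8, "water_supply": 6}

print(f"Site A: {site_score(site_a):.2f}  Site B: {site_score(site_b):.2f}")
```

Note how the fiber-rich site can lose to the site with better power, climate and tax numbers once the newer criteria carry real weight, which is exactly the shift described above.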

While the risk of natural and human disasters has always been part of every data center site selection, that risk has seriously changed. My 20+ page checklist of hundreds of items, from nearby manhole covers, flight paths and train tracks to the nearest police station, has not been as heavily used, as it seems to add less value than thinking about the BIG natural disasters that can occur and the unforeseen human-caused ones. While we used to worry about a truckload of guys with AK-47s jumping out to break into a data center, the reality is that this is a thin probability and one that is difficult to prevent. Meanwhile, the disasters we can actually prepare for, known, potential and unforeseen, are the ones we have not focused on well.

For example, has anyone thought about their utility system being hacked and shut down for an extended period of time? Have you asked your electric utility if they are NERC CIP compliant to ensure that they have a much lower chance of being hacked and shut down? Have you thought about your electric utility meter, water meter, main switchboard and generator switchgear being connected to the Internet and/or your utilities and thus being able to be hacked into, shut down, or damaged?

And the main thing: how about natural disasters? As an industry, we've built data centers in seismically active areas (e.g., Japan, California, Oregon (also with extreme tsunami risk) and Washington) and build so the building stays up, but we don't think about all of the IT gear shuffling about and personnel getting hurt. A building that stands while the IT gear rolls around like marbles isn't a data center that will sustain an earthquake, only one that will memorialize what happened while we rebuild the inside.

We build data centers in hurricane and tornado areas (Texas, Kansas, Nebraska, North Carolina, South Carolina, Virginia, Georgia) and build for them pretty well, but do we think about what has not yet come but likely will?

I've written before about the most dangerous seismic area in the US being not on the West Coast or even the East Coast, but the New Madrid Fault, lying right under Kansas City, St. Louis and a large part of middle America.

Lately we've had tremendous flooding along the Mississippi and Missouri Rivers, and yesterday tremendous dust storms hit Arizona; the photos of the dust storm that hit Phoenix are amazing. Likely future heat waves will add to the dust storms in the Phoenix area. Do you want your data center operating in this?

The frequency of severe hurricanes and tornadoes has increased many-fold over the last decade, and we saw more serious renditions of each over the last several months, including in places we weren't expecting them, such as Massachusetts and Missouri, where tornadoes tore through very robust buildings, even a hospital data center. Look at the photos of devastation in Alabama from the recent tornadoes. Imagine you and your employees facing something like this; would they even come to work? One would likely need to shut down the data center for human-resource reasons even if everything kept working.

My point is: do not overlook the seriousness of your data center site selection. Consider what MAY happen, with some probability, and don't assume that just because something hasn't happened, it won't. Research the probabilities. The web is a wonderful tool for this information, and so are your data center site selection experts at MegaWatt Consulting and others. Use us to help you avoid future problems.
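"Research the probabilities" can be made concrete with annualized-expected-loss math: probability times consequence, hazard by hazard. The probabilities and loss figures below are purely illustrative assumptions of mine, chosen only to show the shape of the comparison.

```python
# Hypothetical annualized-expected-loss comparison of site risks.
# All probabilities and dollar losses are assumed for illustration.

hazards = {
    # hazard: (annual probability of occurrence, loss in $ if it occurs)
    "major_earthquake":    (0.002,  50_000_000),
    "severe_tornado":      (0.010,   8_000_000),
    "utility_hack_outage": (0.020,   3_000_000),
    "armed_break_in":      (0.0001,  2_000_000),
}

# Rank hazards by annualized expected loss (probability x consequence).
for name, (p, loss) in sorted(hazards.items(),
                              key=lambda kv: kv[1][0] * kv[1][1],
                              reverse=True):
    print(f"{name:20s} annualized expected loss: ${p * loss:,.0f}")
```

With these assumed numbers, the dramatic armed break-in scenario ranks dead last, while the unglamorous natural and utility hazards dominate, which is the point the posts above keep making.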

Stay healthy and let’s help each other grow our industry. KC Mares

The Design of NCAR’s “Chillerless” data center with over 600 Watts/SF

Sunday, May 22nd, 2011

"Chiller-less", "refrigeration-less" and "compressor-less" designs are something I have been striving toward for several years, with my testing and use of air-economized systems in data centers starting in 2002. In 2008-2009, I was lucky to join Rumsey Engineers (now the Integral Group) as a consultant to work on data center projects. A fantastic experience, as Rumsey Engineers designs the most efficient mechanical systems of any team I know. In 2009, they believed they had more LEED Platinum buildings than any other engineering firm, and their numbers bear it out.

Together in early 2009 we led a design charrette for a new data center for the National Center for Atmospheric Research (NCAR), the folks who study climate data. As part of our design scope, we researched future generations of High-Performance Computing (HPC, aka supercomputer) equipment: its expected future energy use, load density, cooling system connections and inlet temperature requirements (some were air-based, others water-based). We looked at future generations of equipment because, by the time the data center was built and the systems ordered and delivered, densities and cooling system connections would differ from today's. This is a key point we make on all of our projects: look at what the hardware will need several years from now, as it usually takes 1-2 years to build a data center and several years to fully load it, and we expect it to meet our operational needs for 10, 20 or more years. So, if the midpoint of the data center's life will be 7-15+ years away, then why would we design it to meet today's computers? This is a mistake we often see in many people's designs and site selections. Life changes; we must think ahead.

And this is why I research and pay attention to many cutting- or leading-edge technologies, and why I sit on the boards of new and innovative technology companies. This helps me see the future. And even though I was shocked to find future HPC systems with densities of over 2,500 Watts per square foot, I know that many computing systems of the future will use much lower densities than today's average, and there are always many technologies that we employ, not just one. Hence, we took a pragmatic approach to this analysis of future HPC systems and the needs of the leading researchers in climate change. (Incidentally, we also did an operating cost analysis of HPC systems expected to come out between 2012 and 2014, and it yielded fairly broad cost differences: enough that a first pass based on compute performance would seem to favor one system, while simply purchasing more of another system to get the same performance would still cost less, stressing the important point to always choose equipment that affords the lowest true total cost of ownership.)
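The parenthetical point above, that buying more of a slower system can still cost less overall, is easy to demonstrate. Here is a rough sketch of that kind of total-cost-of-ownership comparison; all of the performance, price and power figures are hypothetical, not the actual NCAR analysis.

```python
# Hypothetical HPC total-cost-of-ownership comparison: match a target
# throughput with each system, then compare capex plus lifetime energy.
# All specs and rates below are assumed for illustration.

TARGET_TFLOPS = 1000      # required sustained performance (assumed)
YEARS = 4                 # service life considered (assumed)
ENERGY_COST = 0.08        # $/kWh (assumed)
HOURS_PER_YEAR = 8760

def tco(tflops_per_node, price_per_node, kw_per_node):
    nodes = -(-TARGET_TFLOPS // tflops_per_node)      # ceiling division
    capex = nodes * price_per_node
    energy = nodes * kw_per_node * HOURS_PER_YEAR * YEARS * ENERGY_COST
    return nodes, capex + energy

# "System A" is faster per node but pricier; "System B" is slower but cheaper.
for name, spec in {"System A": (10, 250_000, 8.0),
                   "System B": (6, 110_000, 6.0)}.items():
    nodes, total = tco(*spec)
    print(f"{name}: {nodes} nodes, {YEARS}-yr TCO ${total:,.0f}")
```

With these assumed numbers, System B needs 67 more nodes yet still comes out several million dollars cheaper over four years, which is exactly the trap a performance-only first pass falls into.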

Given that the site chosen for this data center was Cheyenne, Wyoming, a state with one of the highest percentages of coal-generated electricity, energy efficiency in this design was essential. Although we were pretty certain we knew which type of mechanical system would be most energy efficient (and likely also lowest cost to build; the two almost always go hand in hand when working pragmatically and holistically), we reviewed a rough design of several systems, including a calculated annual PUE and a rough estimated build cost for each. We explored airside economization with 68F and 90F supply air temperatures, the Kyoto cooling system (heat wheel), a modified heat wheel approach with economization, and waterside economization with 46F and 64F chilled supply water. Our modified heat wheel and our high-supply-temperature air- and water-economized solutions did not require chillers; the temperatures were what they were because we pushed them until chillers were no longer required. We chose the water-economized system, which had been our guess at the best system before we started any design analysis. It provided 64F supply water, which was important because many HPC systems of the future will only run on chilled water and this temperature is acceptable for the majority of them, and it also provided the lowest PUE, about 1.11, AND the lowest cost to build. This once again proves my motto that we build the most efficient data centers at the lowest cost; the two seemingly disparate goals of capital cost and operating expense are once again aligned. Hence why we take a very pragmatic and holistic approach, with an open mind, to achieve the most.
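On a coal-heavy grid, the difference between candidate PUEs compounds quickly. Here is a back-of-envelope sketch of that effect; only the ~1.11 PUE comes from the analysis above, while the IT load, the other PUE values, the energy rate and the carbon intensity are assumed values for illustration.

```python
# Back-of-envelope comparison of annual overhead energy, cost and CO2 at
# different PUEs. Only the 1.11 figure is from the project; everything else
# is an assumed illustration.

IT_LOAD_KW = 5000          # assumed steady IT load
RATE = 0.05                # $/kWh (assumed industrial rate)
CO2_LBS_PER_KWH = 2.0      # assumed coal-heavy grid intensity
HOURS = 8760

for label, pue in [("water economizer (chosen)", 1.11),
                   ("heat wheel variant (assumed)", 1.18),
                   ("conventional chiller plant (assumed)", 1.60)]:
    total_kwh = IT_LOAD_KW * pue * HOURS
    overhead_kwh = IT_LOAD_KW * (pue - 1) * HOURS    # non-IT energy
    print(f"{label:36s} total {total_kwh/1e6:.1f} GWh/yr, "
          f"overhead ${overhead_kwh * RATE:,.0f}/yr, "
          f"{overhead_kwh * CO2_LBS_PER_KWH / 2000:,.0f} tons CO2/yr")
```

Even at an assumed five-cent rate, moving from a conventional plant to the chiller-less design in this sketch saves over a million dollars a year in overhead energy alone, which is why build cost and operating cost so often align.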

This new 153,000 SF building is designed to accommodate and secure the Scientific Computing Division's (SCD) future in sustaining the computing initiatives and needs of UCAR's scientific research constituents. The final design was based on NCAR's actual computing and data storage needs and a thorough review of future High-Performance Computing (HPC) and storage technologies, leading to a 625 Watts/SF HPC space and a 250 Watts/SF medium-density area. The data center is divided into two raised-floor modules of 12,000 SF each, with a separate data tape system area to reduce costs, increase efficiency and provide different temperature and humidity requirements than the HPC area. Also provided are a 16,000 SF office and visitor area heated by waste heat from the data center, and a total facility capacity of 30 MVA.

Unique requirements of this high-density HPC data center were to also achieve ultra-high energy efficiency and LEED Silver certification on a modest construction budget. Various cooling options were analyzed, including Kyoto and other heat wheels, air economization, a creative solution of direct heat exchange with the city water supply pipe, and variations of water-economized systems. Ultimately, LEED Gold certification and an annual operating PUE of about 1.14 are expected. This low a PUE was thought to be impossible at the time of design (early 2009), especially at such high density and at TIER III. Through creative problem solving, the low PUE is obtained by designing a 9' interstitial space above the raised floor, combined with a 10' waffle-grid raised floor, to provide a low-pressure-drop air recirculation system built into the building itself. Ten day-one chillers of 100 tons each provide supplemental cooling and optimum efficiency as load varies during hot summer months, while an indirect evaporative system with 96 fans in a fan wall provides ultra-low-energy cooling. An on-site water supply tank, a total of nine standby generators of 2.5 MVA each at full build-out, six 750 kVA UPS modules and other systems support the low PUE and low construction budget of this high-density HPC data center.
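As a sanity check on the capacity figures quoted above, the densities, module sizes and plant equipment can be reconciled in a few lines. The module load split and the power factor below are my assumptions, not NCAR's design data; the densities, PUE and equipment counts come from the text.

```python
# Sanity-check of the quoted capacity figures. Load split between modules
# and the power factor are assumptions for illustration.

HPC_WSF, HPC_SF = 625, 12_000    # one module at full HPC density (from text)
MED_WSF, MED_SF = 250, 12_000    # second module at medium density (assumed split)
PUE = 1.14                       # expected annual operating PUE (from text)
PF = 0.9                         # assumed power factor for kVA-to-kW conversion

it_load_kw = (HPC_WSF * HPC_SF + MED_WSF * MED_SF) / 1000
site_load_kw = it_load_kw * PUE
gen_capacity_kva = 9 * 2500      # nine 2.5 MVA generators at full build-out
ups_capacity_kw = 6 * 750 * PF   # six 750 kVA UPS modules

print(f"IT load:      {it_load_kw:,.0f} kW")
print(f"Site load:    {site_load_kw:,.0f} kW (at PUE {PUE})")
print(f"Generators:   {gen_capacity_kva:,} kVA at full build-out")
print(f"UPS capacity: {ups_capacity_kw:,.0f} kW")
```

Under these assumptions the generator plant comfortably covers the full site load, while the UPS modules cover only a critical subset of the IT load, a common design choice to keep the construction budget down.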

Here is a drawing of this data center now under construction:

Considering all of the vulnerabilities of data center sites

Thursday, May 5th, 2011

Where to hide your data center and protect it from damaging natural disasters?

I have built two data centers in the Raleigh, North Carolina, area. I traveled to Raleigh about once a month over a couple of years for these projects, many times driving in ice storms. It's really quite fun to drive around when everything is coated in a sheet of ice; it's like driving a Zamboni without an ice rink. Quite frankly, only people like me who have too much confidence in their driving abilities drive; everyone else stays home, and for good reason, as many cars end up stuck on the roads or crashed in these conditions. Recently, storms in the Raleigh area caused a wide path of "death and damage", as reported in the NY Times, with emergencies declared throughout North Carolina, Mississippi and Alabama. More extreme weather is predicted for the eastern seaboard with ever-increasing climate change. Hurricane frequency and strength have increased several times over the last few years. Remember when one good hurricane a year was normal? Now it's dozens, so much so that in the busiest seasons the alphabetical list of names has been exhausted.

Remember when California was the only place we expected large earthquakes? Well, except for Japan, which reminded us once again of the devastation that can occur along the Pacific Rim. I was in middle Baja following the recent Japan earthquake and had to change plans due to a tsunami warning from that earthquake nearly 10,000 miles away, proving the point that being near the ocean following an earthquake can be risky.

The largest earthquake in 35 years hits Arkansas. What, you ask?! Arkansas? Yes, the largest in that state, and just one amongst more than 800 earthquakes in Arkansas since September 2010. Wow!! You can read more about it in this AP/Yahoo news article.

But even more spectacular, and I bring up earthquakes in Arkansas merely as an example, is that the largest risk of large-scale earthquake damage in the US is located right under the middle of the country: the New Madrid Fault. Lying directly under Kentucky, Indiana, Illinois, Tennessee, Mississippi and Arkansas, this baby is HUGE! It has the ability to create horizontal acceleration of 1.89g, almost 5 times greater than the ground acceleration at The Reno Technology Park near Reno, NV, which is located on stable ground absent of any earthquake faults. See this thesis on the effects of earthquakes on bridge design, which is the pinnacle of civil engineering for earthquakes, as bridges are designed for 75-year effects, not 20 as with most building construction. Even Texas is not immune to earthquakes, having had damaging ones in 1882, 1891, 1917, 1925, 1931, 1932, 1936, 1948, 1951, 1957, 1964, 1966, 1969 and 1974, many of them felt as much as two states away from Texas, which covers a very large area. I type out all of these just to prove the point that even areas thought to be immune from damaging earthquakes have them, and more frequently than we care to remember. You can read more in this USGS article about Texas earthquakes.

And thus the punch line: consider data center site selection very carefully. Just because an earthquake has not happened in an area for a long time does not mean that the area is immune to a damaging one. Check out this map of large-earthquake potential and look at the two large circles of converging lines in the middle of the US and under South Carolina; these are the areas of greatest earthquake threat to the public and to buildings in the US:

How about volcanoes? Sure, why worry unless you're in the South Pacific, Hawaii or Costa Rica, right? Wrong. Over half of the world's active volcanoes are in... did you guess... the good 'ole US of A. That's right. Most of those are in Alaska, as the Aleutian island chain is a pretty exciting place to be, and most of those in the continental US are located in Washington and Oregon. But guess what the most exciting place in the US is for a very damaging eruption, of proportions thousands of times greater than the atomic bombs exploded on Japan to end World War II? Wyoming. Yellowstone has long been famous for Old Faithful, heated by a geological hot spot, the same type that has created and is still creating the Hawaiian Islands. But new research calls it a supervolcano. Two of the larger eruptions from this supervolcano produced 2,500 times more ash than the Mt. St. Helens eruption in 1980, which deposited about 10' of ash through eastern Washington and elsewhere. And this hot spot is getting hotter, expected to impact Idaho, Wyoming and Montana with a greater frequency of earthquakes and a possible very large eruption that could wipe out a very large area. Read more about it here.

Why is all this important to point out? Because we've designed and built data centers to withstand the impacts of what we EXPECT in a certain area, yet so many areas have more impacts than we imagined. Which leads me to site selection. Site selection isn't as simple as looking at what has recently occurred or what we think might occur in an area; it should involve thorough research and understanding of what the risks over time really are, and choosing a site that best meets our risk tolerance/"comfort" during the life of the data center. And all risks should be reviewed, even those that seem unlikely, as we can see from many of these examples that unlikely events can turn out to be devastating to any data center. Hence, location research is paramount to good site selection, and these issues must not be overlooked. A good example is the over 20 active volcanoes in the Portland and Seattle area. Be aware of the risks in your decision, or it could lead to a really bad day.