Is it possible, a data center PUE of 1.04, today?

I’ve been involved in the design and development of over $6 billion of data centers, probably closer to $10 billion now (I lost count after $5 billion a few years ago), so I’ve seen a few things. One thing I do see in the data center industry is more or less the same design over and over again. Yes, we push the envelope as an industry, and yes, we do design some pretty cool stuff, but rarely do we sit down with our client, the end user, and ask them what they really need. They often tell us a certain Tier level or availability they want, and the MWs of IT load to support, but what do they really need? Often everyone in the design charrette assumes what a data center should look like without really diving deep into what is important.

When we do that, we can get some very interesting results. For example, I’ve been fortunate to be involved with the design of three data centers this year, and in all three we were able to push the envelope of design and ask some of these difficult questions. Rarely did I get the answers from the end users I wanted to hear, answers that really questioned the traditional thinking of what a data center should be and why, but we did reach some unconventional conclusions about what they needed instead of automatically assuming what they needed or wanted. As a consequence, we designed three data centers with low PUEs, or even what I like to call “ultra-low PUEs”, those below 1.10. The first was at 1.08, the next at 1.06, and now we have a 1.046; OK, let’s call it 1.05, since the other two are rounded as well. (We know we can get that one down to about 1.04 with a few more tweaks to that “what is really needed” question.)

Now, I figured that a PUE of 1.05 was going to take a few years to reach because the hardware needed to improve, i.e. chillers, UPSs, transformers, etc. But what I didn’t take into account was that when we really look at what the client needs, not wants, and what we can do to design for efficiency without jumping to the same old way of designing a data center, we can reach some great results. I assume this principle can apply to almost anything in life.

Now, you ask, how did we get to a PUE of 1.05? Let me hopefully answer a few of your questions: 1) yes, it is based on annual hourly site weather data; 2) all three have densities of 400-500 watts/sf; 3) all three are roughly Tier III to Tier III+, so all have roughly N+1 (I explain a little more below); 4) all three are in climates that exceed 90F in summer; 5) none use a body of water to transfer heat (i.e. lake, river, etc.); 6) all are roughly 10 MW of IT load, so pretty normal size; 7) all operate within TC9.9 recommended ranges except for a few hours a year within the allowable range; and most importantly, 8) all have construction budgets equal to or LESS than standard data center construction. Oh, and one more thing: even though each of these sites has some renewable energy generation, it is not counted in the PUE to reduce it; I don’t believe that would be in the spirit of the metric.
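To put those numbers in perspective, here is a back-of-the-envelope sketch of what a PUE of 1.05 implies for a 10 MW site compared with a more conventional design. The baseline PUE of 1.5 and the electricity price are my assumptions for illustration, not figures from the post:

```python
# What does PUE 1.05 save versus a conventional design?
# PUE = total facility energy / IT energy, so overhead = (PUE - 1) * IT load.

IT_LOAD_KW = 10_000        # ~10 MW of IT load, as in the post
HOURS_PER_YEAR = 8_760
PRICE_PER_KWH = 0.10       # assumed $/kWh

def annual_facility_kwh(pue: float, it_kw: float = IT_LOAD_KW) -> float:
    """Total facility energy per year for a given PUE."""
    return pue * it_kw * HOURS_PER_YEAR

baseline_pue = 1.5         # assumed "standard design" for comparison
savings_kwh = annual_facility_kwh(baseline_pue) - annual_facility_kwh(1.05)
savings_usd = savings_kwh * PRICE_PER_KWH
print(f"Annual savings vs PUE {baseline_pue}: "
      f"{savings_kwh / 1e6:.1f} GWh, ${savings_usd / 1e6:.2f}M")
# -> Annual savings vs PUE 1.5: 39.4 GWh, $3.94M
```

Even against a fairly modest baseline, the gap works out to several million dollars a year, which is consistent with the "saving millions of dollars per year in energy" claim below.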

Now, for some of the juicy details (email or call me for more, or read future blog posts). We questioned what they thought a data center should be: how much redundancy did they really need? Could we exceed ASHRAE TC9.9 recommended or even allowable ranges? Did all the IT load really NEED to be on UPS? Was N+1 really needed during the few peak hours a year, or could we get by with just N during those hours and N+1 the rest of the year? The main point of this blog post is that low PUEs, like 1.05, can be achieved, yes, been there and done that now, for the same cost or LESS than a standard design, and done TODAY, saving millions of dollars per year in energy, millions of tons of CO2, millions of dollars of capital cost up front, less maintenance, etc. We just need to really dive deep into what we need, not what we want or think we need, and we’ll be better at achieving great things. Now, I need to apply this concept to other parts of my life; how about you?


12 Responses to “Is it possible, a data center PUE of 1.04, today?”

  1. Cosme Garcia says:

K.C., this one sentence stood out: “Did all the IT load really NEED to be on UPS?” I can assume a lot from this rhetorical question. With 2 power supplies standard and 3 commonplace, one could assume that you could place 1 feed on UPS, or segment your data center into critical and non-critical components. I am sure there is more. Would you elaborate?


    • KC Mares says:

Hi Cosme. The question is far from rhetorical, and this is what I mean by assuming the same way of doing things. Many large internet players do not use UPS on all of their IT loads. You bring up two very good ways to avoid connecting everything to UPS. And think about it: if your UPS is 95% efficient, which is likely very optimistic, then 5% of your load is being wasted, plus UPS cooling costs. If you have 10 MW of IT load at $0.10/kWh, that is almost $1 million per year, wasted, plus the battery replacement and initial capital cost. Let’s say that half the IT load is on UPS instead of all of it; that is over $500k per year in OpEx (energy, battery replacement & AC) and roughly $2+ million extra in CapEx. These are very rough numbers off the top of my head when really tired, so actual numbers will vary. Reducing sags and surges isn’t all that valuable anymore with today’s power supplies, and most loads are redundant to each other anyway or act as a group, especially with virtualization, so some units going off once per year or less due to a power outage and coming back on again in 15 seconds on generator isn’t such an issue for most apps & hardware, especially with load sharing and auto-transfer between redundant data centers.
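A minimal sketch of the UPS-loss arithmetic in this reply. The cooling-overhead factor is my own assumption; the reply's "almost $1M" figure also folds in battery replacement and other costs beyond the pure electrical loss:

```python
# Rough cost of running all IT load through a 95%-efficient UPS.
# Every kW lost in the UPS becomes heat that must also be cooled.

IT_LOAD_KW = 10_000        # 10 MW of IT load, per the reply
UPS_EFFICIENCY = 0.95      # "likely very optimistic", per the reply
PRICE_PER_KWH = 0.10
HOURS_PER_YEAR = 8_760
COOLING_OVERHEAD = 0.3     # assumed kW of cooling power per kW of UPS heat

# UPS input must exceed IT load; the difference is dissipated as heat.
loss_kw = IT_LOAD_KW / UPS_EFFICIENCY - IT_LOAD_KW
annual_loss_usd = loss_kw * HOURS_PER_YEAR * PRICE_PER_KWH
annual_cooling_usd = annual_loss_usd * COOLING_OVERHEAD
print(f"UPS losses: {loss_kw:.0f} kW -> ${annual_loss_usd / 1e3:.0f}k/yr "
      f"energy + ${annual_cooling_usd / 1e3:.0f}k/yr cooling")
# -> UPS losses: 526 kW -> $461k/yr energy + $138k/yr cooling
```

So roughly $600k a year before batteries and capital cost are counted, which is why halving the UPS-backed load plausibly saves the $500k+ per year quoted above.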

  2. KC, very impressive. Are these values design values, or values measured after the site was commissioned? Or are they average operational values?

If UPSs do not even play a role in the design, and cooling … is re-thought completely, does it make sense to talk about a Tier III site?



    • KC Mares says:

Hi Rafael, these PUEs are calculated using hourly site weather data and a very in-depth calculation by Rumsey Engineers, using hourly operation of equipment, their years of analysis work, site audits, and electrical and mechanical equipment performance curves. The calculated results are likely to be very close to real-world operations and will be compared once 12 months of usage is completed. People also question whether PUEs this low are possible; with really good design they can be, but every little detail matters. All of the sites mentioned do have UPSs, two with 100% of IT load on UPS and one with only 10% of site load on UPS, so really efficient UPSs are necessary to achieve these low PUEs. All three sites are at least N+1 for all critical equipment, including the one with only a portion on UPS; all meet the Tier III guidelines. Generally, as we add more equipment, PUEs and costs go up and site reliability can go down unless it is done well. We achieve a balance between low capital cost, very low operational cost, and good reliability: by designing for efficiency, we need less equipment such as chillers, pumps, etc., which inherently makes the site more reliable. In all of these designs, we use little to no refrigeration most of the year, and one of them uses none all year thanks to really good economization techniques. I’m happy to set up a time with Rumsey Engineers to speak with folks about more of the details.
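The hourly-weather calculation described here can be sketched in a few lines. One subtlety worth showing: an annual PUE is energy-weighted (total facility kWh divided by IT kWh over all 8,760 hours), not a simple average of hourly PUE values. The hourly profile below is purely illustrative, not Rumsey Engineers' actual model:

```python
# Annual PUE from hourly data: sum energies first, then divide.

def annual_pue(hourly_it_kwh, hourly_overhead_kwh):
    """PUE = total facility energy / IT energy, summed over the year."""
    it_total = sum(hourly_it_kwh)
    facility_total = it_total + sum(hourly_overhead_kwh)
    return facility_total / it_total

# Toy profile: constant 10 MW IT load; modest overhead during the many
# economizer hours, higher overhead for a few hot hours that need
# refrigeration (both overhead figures are assumed).
it = [10_000.0] * 8_760
overhead = [450.0] * 8_660 + [1_500.0] * 100
print(f"Annual PUE: {annual_pue(it, overhead):.3f}")
# -> Annual PUE: 1.046
```

Because the result is energy-weighted, a handful of hot hours in the allowable range barely moves the annual number, which is how a site can hit an ultra-low PUE despite a 90F+ climate.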
