Posts Tagged ‘Energy Efficiency’

We learn our skills in and out of the work place

Monday, September 19th, 2011

From time to time, I write a little about non-data center or energy things, just to mix it up and share with folks. Sometimes it is these posts that generate the most interest and conversation. Plus, I do believe that since we all work together, it’s nice to share some of our personal lives with each other. After all, we are people working together based upon relationships. It is the things in our personal lives that drive us to work hard; they are essential parts of who we are as people, and consequently, they affect our daily work lives and relationships.

I also find that many of the things that I do in my personal life influence my work life. I’m sure we all find that at times, we reach an epiphany when walking the dog, talking to our spouse or friends, or some other activity that drives a decision or direction in our work the next day. I had one two weeks ago when talking with friends over dinner. But, that is not the topic of this blog.

Instead, it goes back another week but really starts when I was in college. I have always liked to push myself physically, and I get a lot out of those endorphins from a good physical challenge but also one with a mental challenge.

So I started mountain biking in college, riding longer and longer, more and more often, until I was riding 365 days per year and training 30+ hours per week. That was on top of my 7-8 course load each semester (a consequence of earning multiple degrees simultaneously) and working part to full time year-round. What can I say, I like to stay busy (I was also on sports teams in addition to cycling, in several clubs, an RA, etc., etc.).

I then turned this “hobby” into training for races, became sponsored (it took me years to finish all those boxes of PowerBars I was provided), and finished races often in the top 10 out of hundreds or thousands of finishers. I earned enough points in my last year of racing and college to be in the top 10 nationally.

However, this, like many other hobbies, wasn’t my calling for a profession, and often hobbies and professions don’t mix very well. But I still get out to ride as often as I can and still love it. And I do a race or two each year, purely for fun but also to stay competitive. So on August 27th & 28th, I completed another 24-hour mountain bike race. I believe it was around my 6th, but I can’t remember, nor have I been keeping track.

People ask how a 24-hour mountain bike race works. Well, you ride a lap, usually about 10-15 miles long (which usually takes about 45-90 minutes to finish), all on dirt, often with much single track, climbs, descents, technical sections, and fast sections, and you complete as many laps as possible in 24 hours. Races can be completed solo or with up to 5 people on a team, trading off each lap in rotation, making each lap an all-out sprint, then resting, downing as much water as your body can absorb, repairing your bike, recharging light batteries, and trying to eat and sleep in the 45-minute to 3-hour rest before the next lap. Races usually start at about noon and end at noon the next day. Powerful bike light systems are used on the night laps, and the key is efficiency and speed while staying upright. Crashes hurt: broken bones are quite common, and ambulance trips happen for racers who push their speed beyond their ability at the time. And ability changes a lot after hours of riding, little sleep, little food, dehydration, and a tired body, especially at night, when visibility is limited to a spot of light 5-20 feet in front of you as speeds exceed 30 mph on the faster downhill and flat sections, with still plenty of rocks, ditches, and other obstacles to avoid.

The key to these races is to manage energy and match speed to skill. Those who push too hard at the beginning of the race (a common mistake) or on any lap typically burn out before the race ends and either can’t finish (often just finishing allows one to move up the scoreboard) or get hurt along the way.

So the key to 24-hour mountain bike racing is maintaining energy through 24 hours of riding with little sleep. It becomes somewhat of a mental game, especially on the late-night laps. But even more so, it is a continual focus on the efficiency of every single pedal stroke (all 100,000 of them) and on the rest of the body, especially the lungs and heart. One must constantly “economize” while pushing bike and self as hard (and consequently as fast) as possible up every hill, down every descent, and around every lap to maintain the fastest average lap time; any one slow lap kills the average. Efficiency at the greatest speed. My lap times varied by less than 10%, even though temperatures ranged by 50 degrees F. Some laps were in full sun, some in full dark; some had heavy traffic of other racers, while on others I passed another racer only every 15 minutes; some were ridden on full energy, and the last lap on maybe an hour of sleep over 24 hours, little food, likely mild dehydration, and most certainly tired legs and body; one even had a mechanical and another a flat tire.

In the data centers I design, efficiency doesn’t change much between hot and cold weather, day and night, packed full of servers or empty, mechanical failures or perfect operations. The key is being as efficient as possible all the time, no matter the adversity. It’s all about economizing and energy efficiency, which is exactly my continuous focus in designing and operating data centers. I love it!

Here is a video of my most recent 24-hour race, the Coolest 24 Hours, which took place at the end of August in Soda Springs, CA (Donner Summit area of the Sierras). The race raised money for those dealing with cancer. In this video, I am the first rider out of the start of the 24-hour racers, wearing a silver jersey and black and yellow cycling shorts with USD on the side (I still fit in my college cycling team shorts almost two decades later), on a red single-speed 29″ Niner bike. I enjoyed being in first place for about the first mile before some of the racers passed me. You can see me do a little jump off the pavement start onto the dirt, and my buddy and fellow racer Stewart do the same in third place, wearing a red & white Niner Bikes jersey. I posted a photo along the course of my aunt, who died of cancer not long ago; many photos of survivors and victims can be seen staked in the ground at the first turn. I finished the race with a smile, a dirty face, a dusty body, a respectable finish, and another lesson in efficiency. Enjoy the video, and get out there to learn more! Here is the video: The Coolest 24 Hours, 2011–KC leads the pack at the start of the 24 hour race

The Data Center Vibration Penalty to Storage Performance

Thursday, June 10th, 2010

Every now and then a really great way to reduce energy use comes along that is so simple we all smack our heads wondering, “why didn’t I think of that!” My principles for achieving ultra-efficient data centers (PUEs between 1.03 and 1.08; I call anything less than 1.10 ultra-efficient) are based upon simplicity and a holistic approach, meeting the need, not the want or the convention. Generally, the simpler the better, as simple is always lower cost up front and ongoing, as well as easier to maintain, more reliable, and more efficient.
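For readers newer to the metric, PUE (Power Usage Effectiveness) is simply total facility power divided by IT equipment power. A minimal sketch, with illustrative numbers rather than measurements from any specific site:

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_load_kw

# A site drawing 1,080 kW in total to run a 1,000 kW IT load:
print(pue(1080, 1000))  # 1.08 -> "ultra-efficient" by the definition above
print(pue(2000, 1000))  # 2.0  -> half the power never reaches the IT gear
```

The closer PUE gets to 1.0, the less power is spent on cooling, power conversion, and other overhead.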

So, here is one that will not catch you by surprise: a rack that saves energy. We’ve all heard of passive and active cooling racks: those with fans, heat exchangers or direct cooling systems. I explored some of the front & rear door heat exchanger racks back in 2003, which work really well for high-density applications but can be very expensive compared to better-designed data center cooling systems. But how about a rack that not only reduces energy costs but also improves hardware performance?

I’ve had the pleasure of exploring, with Green Platform’s CEO, Gus Malek-Madani, their anti-vibration rack (“AVR”), a carbon-fiber composite rack designed to remove vibration. Why remove vibration? Green Platform claims that a typical datacenter experiences vibration levels of around 0.2 G root mean square (GRMS), and that this can degrade a disk drive’s performance (both I/O and throughput) by up to 66%, a claim that was borne out during a ‘rigorous’ testing exercise it did in conjunction with Sun Microsystems.

As hard drives get “larger” in capacity, bits get crammed into a smaller space. This, along with physically smaller drives, forces tolerances between the rotating platters and the movement of the mechanical actuator arms within the drives to get tighter, and thus vibration causes drives to slow down or suffer more read & write errors, slowing I/O performance. “As a result of this ‘vibration penalty,’ the company believes that up to a third of all US datacenter spending – on both hardware and power – is wasted on vibration, amounting to some $32bn of wastage. The company also says there’s evidence that reducing the impact of vibration will serve to improve the reliability of drives (and improving mean time between failure.)”

In order to back up this figure, early tests with Sun Microsystems (pre-Oracle) and Q Associates (“Effects of Data Center Vibration on Compute System Performance” by Julian Turner) showed IOPS improvement of up to 247% in random I/O. The following chart shows this storage performance degradation:



You can also watch the following video that clearly shows that just yelling into the face of storage hardware causes a very visible degradation of storage performance: http://www.youtube.com/watch?v=tDacjrSCeq4

If the vibration from yelling into a rack causes performance degradation, think about the vibration effects from HVAC systems, thousands of server fans, and even walking thru your data center.

The company says its carbon-composite design massively reduces the vibrations that can cripple hard disk drive performance, boosting performance, efficiency, and even reliability. From the results of the tests stated above, they estimate that most folks should see a 100% improvement in storage throughput, 50% shorter job times and, consequently, 50% less power consumed per job. In testing with Sun Microsystems, the AVR dissipated vibration by a factor of 10x to 1000x. In further testing with systems integrator Q Associates, which pitted the AVR against a regular steel rack, random read IOPS increased by between 56% and 246%, with random write IOPS showing a 34% to 88% improvement with the AVR.

“The throughput and I/O rate of storage remains a significant performance bottleneck within the datacenter; though hard disk drive (HDD) capacities have increased by several orders of magnitude in the last 30 years, drive performance has improved by a much smaller factor. This issue is exacerbated by the fact that server performance, driven by Moore’s law, has increased massively, to the extent that there’s now a server-storage performance gap. The way most datacenters engineer around this problem is inefficient; typically workloads are striped across many disks in parallel, and disks are ‘short stroked’ — i.e. data is only written to the edge of platters – in order to minimize latency. Although this does address performance, the trade-off is that disk capacity is massively underutilized, wasting datacenter space and energy, not to mention the cost of reliably maintaining an unnecessarily large disk estate.”

In the many data centers I have had the pleasure of working in lately, storage is growing faster than server capacity, and the greatest performance limitation is storage throughput. This product already works for the high-end video/audio and scientific markets, a niche space where another of Malek-Madani’s companies, Composite Products, LLC, is focused. The test results clearly show storage throughput dramatically improved by reducing vibration at the rack of storage hardware. With some 3 million storage racks currently in use inside datacenters worldwide, growing by the second and probably eventually exceeding server racks, this is a very large opportunity to improve performance while reducing energy use, always one of my main mantras. Green Platform expects to have their racks offered as an option by storage vendors (NetApp, EMC, and others), so that as you purchase and provision new storage systems, you pay a small incremental increase in the price of the storage system for a very large improvement in performance and energy reduction. Think of all of those servers waiting so much less for data throughput and how much that can improve the utilization of those systems. Think about it.

I’m looking to conduct an end-user test with their rack; contact me if you’re interested so we can determine results for your organization.

Can we replace UPSs in our data centers?

Tuesday, April 27th, 2010

It has been common since I entered the data center realm 15 years ago for a data center to have Uninterruptible Power Supplies (UPSs) feeding all computer equipment and other critical loads. The UPS did two things: 1) kept power flowing from the batteries in the UPSs for a short duration until generators came on, utility power was restored, or computer equipment could be shut down; and 2) kept voltage and frequency stable for the computer load while the utility (or generator) power fluctuated, known as sags or surges. However, UPSs consume about 5-15% of the power entering them as losses in the units (a.k.a. inefficiency). So if the IT load equals 1 MW, UPS input power will be about 1.1 MW, with the additional 100 kW lost as heat, which then requires additional cooling to maintain the roughly 75F temperature at which batteries and UPSs run best. Here is a photo of some UPS systems:
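That overhead arithmetic can be sketched out quickly; the 10% loss figure below is an illustrative mid-range value of the 5-15% cited, not a measurement of any particular UPS:

```python
it_load_kw = 1000.0   # 1 MW of IT load
loss_fraction = 0.10  # assumed mid-range of the 5-15% loss cited above

loss_kw = it_load_kw * loss_fraction  # dissipated as heat inside the UPS
ups_input_kw = it_load_kw + loss_kw   # power that must be delivered to the UPS

print(ups_input_kw)  # 1100.0 kW, i.e. ~1.1 MW
print(loss_kw)       # 100.0 kW of heat the cooling plant must also remove
```

And the 100 kW of heat is paid for twice: once as lost electricity, and again as extra cooling load.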

Now, enter 2010. UPSs are still assumed by nearly every data center engineer and operator to be needed or required, yet power electronics within the computer equipment can ride thru just about any voltage sag or surge a utility would pass on thru its protective equipment. Computer equipment power supplies have been rated for 100-240VAC and 50-60 Hertz for about 10 years now, a far greater range than a utility will likely ever pass on. Furthermore, due to capacitors in the power supplies, these devices can ride thru complete outages of about 15+ cycles, which is roughly 1/4 second. So the UPS’s job is really now only to provide ride-thru for outages over 1/4 second, until a generator comes on or as otherwise needed by the operation.

In many of the data center design charrettes that I have been part of over the last few years, we ask the users what really needs to be on UPS, avoiding the assumption that all computer load must be. Once we dive into the operations, we always come back with an answer from the data center operators that only a portion of the computer load needs to be on UPS and the rest can go down during an occasional utility outage. The reason is that these computers can stop operating for a few hours without affecting the business. Examples might be HR functions, crawlers, backup/long-term data storage, research computers, etc. Computers that might need to be on UPS include sales tools, accounting applications, short-term storage, email, etc., but not every application and function. Think about what in your own data center operations can go down every now and then from a utility outage (usually about once per year for a few hours) and see if you can reduce the total amount of UPS power you require, repurposing that expensive UPS capacity and energy loss for the critical functions.

Some data centers avoid UPSs entirely by putting a small battery on the computer itself; in Google’s widely publicized case, an inexpensive and readily available 9V battery. While this is an excellent idea for those that have custom computer hardware, it is not as easy to implement for most folks buying commodity servers today. Perhaps an idea better suited to the masses is to locate a capacitor on the computer board or within the server that can ride thru the ~20+ seconds until generator(s) can supply the load during a utility outage. Capacitor technology of today should make this fairly easy to implement, and it could be a standard feature on all computer equipment at minimal added cost, much as international power supplies did for us 10+ years ago and higher-efficiency (90%+) power supplies are doing today. A great new technology that could make this easy to build onto the computer board can be seen here:
http://newscenter.lbl.gov/feature-stories/2010/04/23/micro-supercapacitor/
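As a rough sketch of why this is plausible, the hold-up time of a capacitor feeding a constant-power load follows from its stored energy: t = ½·C·(V1² − V2²) / P. The 300 W server and 12 V-to-9 V bus numbers below are hypothetical, chosen purely to size the idea:

```python
def ride_through_s(capacitance_f: float, v_start: float, v_min: float, load_w: float) -> float:
    """Hold-up time from stored capacitor energy: t = 0.5 * C * (V1^2 - V2^2) / P.

    Assumes the load draws constant power while the bus sags from v_start to v_min.
    """
    usable_joules = 0.5 * capacitance_f * (v_start ** 2 - v_min ** 2)
    return usable_joules / load_w

# Hypothetical 300 W server on a 12 V bus that tolerates a sag down to 9 V.
# Solving t = 0.5*C*(V1^2 - V2^2)/P for C gives the capacitance needed for ~20 s:
c_needed_f = 2 * 300 * 20 / (12 ** 2 - 9 ** 2)
print(round(c_needed_f))                              # ~190 F: supercapacitor territory
print(round(ride_through_s(c_needed_f, 12, 9, 300)))  # ~20 s of ride-thru
```

Hundreds of farads is far beyond ordinary board capacitors, which is exactly why dense supercapacitor technology like the one linked above would be the enabler here.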

Using a technology like this, we could avoid UPSs entirely in our data centers by building enough ride-thru onto the computer boards, into the hardware, allowing us to save very expensive UPS power capacity, operating and maintenance expenses, and space within our data centers for more important functions: compute and storage capacity. My thought for the day. Think about it and you might save some money and energy.

Batteries

Friday, January 29th, 2010

I own a home in the Lake Tahoe region (Nevada side), and it can get a fair amount of snow. Not as much as I’d like most of the time–yes, I like the beauty of the fluffy white water–but around 40′ per year (yes, feet). It often comes in fairly big drops at a time, usually with good weather in between. And after seven years of living there (at least part time), I finally went out and purchased a snow blower. Before then, it was pure man-power: two snow shovels of different styles, depending upon the type and depth of the snow. Yes, I always enjoyed the workout in the fresh air. I have a very large driveway, and I used to have a Subaru and other all-wheel or four-wheel drive cars, with which I would wait for the garage door to open and the street to be clear of cars, then hit ramming speed so that my car would essentially fly off the snow on my driveway onto the street. I would repeat the same to get back into the garage: wait for the door to open, ram the driveway snow, and slide into the garage. A couple of near misses of almost sliding into the house were a bit close. Well, after several shovelings in a row with limited time to shovel due to my busy schedule, I thought again about owning a snow blower as a time saver, but I never wanted a 250-pound metal contraption that burns fuel and makes noise, taking up a lot of space all year in my garage for a few uses each year. It seemed to make little sense vs. my exercise regime. But time was not on my side. Now you’re probably wondering what this blog post has to do with data centers or energy…I’m getting to it.

I own a cordless electric lawn mower and weed whacker, each of which I love for its silence, lack of fumes, and functionality. I can mow or trim at 6 AM on a Sunday without disturbing neighbors with noise; I have no tune-ups, spark plugs, pull starters, fumes to breathe, or oil and gas cans to fill and spill, and less than $3 total per year in electricity. (Yes, I measured it over a full year.) So, I looked into electric snow blowers. Well, there are no cordless ones, only corded, so I purchased a slightly used Toro electric snow blower and have been amazed with its snow-throwing ability. It can throw snow, depending upon the depth (the deeper the better), about 10-20′, cuts thru 12″-deep snow, or deeper with a second pass, and clears my large driveway quicker than my neighbors can clear theirs with their gasoline-powered ones. One problem: the cord. Yes, I have a plenty-long extension cord and an easy plug I installed for it, but I have to plan out the “route” and keep the cord clear, which means every pass for the first few passes, then about once every ten passes once I get sections cleared. But a few days ago, feeling lazy, I decided to see what would happen if I ran over the cord instead of clearing it. It was worse than I imagined. Within about one second, the cord was wrapped at least 20 times around the blade, very tightly. After about 5 minutes of figuring out how to solve the problem, the unwrapping process was rather easy and quick, but it left me wanting a battery-powered snow thrower so as not to mess with a cord at all. Today, battery technologies would make this $250 snow thrower cost over $1,000, weigh many times its svelte 35 pounds, and likely be not quite as powerful. Battery technologies are one of the greatest keys to solving many challenges, whether storing energy from solar plants, powering our many electronic tools and toys, or propelling our transportation.
Improved battery technologies with lower cost, lower weight, and better performance will be key to implementing many energy-saving, fume-reducing, and performance-enhancing solutions. Let’s push for much better energy storage technologies; they are one of the holy grails for many things.

Is it possible, a data center PUE of 1.04, today?

Saturday, August 22nd, 2009

I’ve been involved in the design and development of over $6 billion of data centers (maybe about $10 billion now; I lost count after $5 billion a few years ago), so I’ve seen a few things. One thing I do see in the data center industry is, more or less, the same design over and over again. Yes, we push the envelope as an industry, and yes, we design some pretty cool stuff, but rarely do we sit down with our client, the end-user, and ask them what they really need. They often tell us a certain Tier level or availability they want, and the MWs of IT load to support, but what do they really need? Often everyone in the design charrette assumes what a data center should look like without really diving deep into what is important.

When we do that, we can get some very interesting results. For example, I’ve been fortunate to have been involved with the design of three data centers this year, and in all three we were able to push the envelope of design and ask some of these difficult questions. Rarely did I get the answers I wanted to hear from the end-users, ones that really questioned the traditional thinking about what a data center should be and why, but we did get to some unconventional conclusions about what they needed instead of automatically assuming what they needed or wanted. As a consequence, we designed three data centers with low PUEs, or even what I like to call “ultra-low PUEs“, those below 1.10. The first was at 1.08, the next at 1.06, and now we have a 1.046; OK, let’s call it 1.05 since the other two are rounded up as well. (We know we can get that one down to about 1.04 with a few more tweaks to that “what is really needed” question.)

Now, I figured that a PUE of 1.05 was going to take a few years to reach because the hardware needed to improve, i.e. chillers, UPSs, transformers, etc. But what I didn’t take into account was that when we really look at what the client needs, not wants, and what we can do to design for efficiency without jumping to the same old way of designing a data center, we can reach some great results. I assume that this principle can apply to almost anything in life.

Now, you ask, how did we get to a PUE of 1.05? Let me hopefully answer a few of your questions: 1) yes, based on annual hourly site weather data; 2) all three have densities of 400-500 watts/sf; 3) all three are roughly Tier III to Tier III+, so all have roughly N+1 (I explain a little more below); 4) all three are in climates that exceed 90F in summer; 5) none use a body of water to transfer heat (i.e. lake, river, etc.); 6) all are roughly 10 MWs of IT load, so pretty normal size; 7) all operate within TC9.9 recommended ranges except for a few hours a year within the allowable range; and most importantly, 8) all have construction budgets equal to or LESS than standard data center construction. Oh, and one more thing: even though each of these sites has some renewable energy generation, it is not counted in the PUE to reduce it; I don’t believe that is in the spirit of the metric.
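To put rough numbers on what a PUE like this is worth at the ~10 MW scale mentioned above, here is a minimal sketch. The 2.0 “typical” comparison PUE and the $0.08/kWh electricity rate are my illustrative assumptions, not figures from these projects:

```python
it_load_mw = 10.0        # the ~10 MW IT load size mentioned above
hours_per_year = 8760
rate_usd_per_kwh = 0.08  # assumed electricity rate; varies widely by site

def annual_overhead_cost_usd(pue: float) -> float:
    """Yearly cost of the non-IT (overhead) energy implied by a given PUE."""
    overhead_kw = it_load_mw * 1000 * (pue - 1.0)
    return overhead_kw * hours_per_year * rate_usd_per_kwh

savings = annual_overhead_cost_usd(2.0) - annual_overhead_cost_usd(1.05)
print(f"${savings:,.0f} saved per year")  # ~$6.7M vs. a PUE-2.0 facility
```

Under those assumptions, the overhead energy alone is worth millions of dollars a year, which is why chasing those last few hundredths of PUE pays off.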

Now, for some of the juicy details (email or call me for more, or read future blog posts). We questioned what they thought a data center should be: How much redundancy did they really need? Could we exceed ASHRAE TC9.9 recommended or even allowable ranges? Did all the IT load really NEED to be on UPS? Was N+1 really needed during the few peak hours a year, or could we get by with just N during those few peak hours and N+1 the rest of the year? And so on. The main point of this blog post is to say that low PUEs, like 1.05, can be achieved (yes, been there and done that now) for the same cost or LESS than a standard design, and done TODAY, saving millions of dollars per year in energy, millions of tons of CO2, millions of dollars of capital cost up front, less maintenance, etc. We just need to really dive deep into what we need, not what we want or think we need, and we’ll be better at achieving great things. Now, I need to apply this concept to other parts of my life; how about you?