Archive for the ‘Sustainability’ Category

How to save on water costs in your data center

Sunday, April 15th, 2012

Two weeks ago I spoke at the Recycled Water Use and Outreach Workshop in Sacramento. I know what you’re asking: “Why is a data center guy talking at a recycled water conference?” Well, funny you should ask.

First of all, most of my ultra-efficient designs use water for cooling, often indirect evaporative systems. Hence, we trade energy use for water use. Water is far less costly than energy and often has a much lower carbon footprint and other environmental impact per unit of cooling than electricity. It is always a bonus to use recycled water, as it has an even lower environmental impact than the standard potable supply. Of course, all water IS recycled in a sense: there are only a finite number of water drops on this wonderful planet that sustains us, and every one of them has been around the water-cycle block at least a few times.
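To make the “water is cheaper than energy per unit of cooling” point concrete, here is a rough back-of-the-envelope sketch. It ignores fan and pump energy, and the water price, electricity price and chiller COP in it are assumed illustrative values, so treat it as an illustration rather than a design calculation.

```python
# Rough comparison of water vs. electricity cost per unit of heat rejected.
# All inputs are illustrative assumptions, not data from any specific site.

LATENT_HEAT_BTU_PER_LB = 970.0   # approximate heat of vaporization of water
LBS_PER_GALLON = 8.34
BTU_PER_KWH = 3412.0

def heat_rejected_kwh_per_kgal() -> float:
    """Heat removed (kWh thermal) by evaporating 1,000 gallons of water."""
    btu = 1000 * LBS_PER_GALLON * LATENT_HEAT_BTU_PER_LB
    return btu / BTU_PER_KWH

if __name__ == "__main__":
    heat_kwh = heat_rejected_kwh_per_kgal()   # roughly 2,370 kWh thermal
    water_cost_per_kgal = 3.00                # assumed $/1,000 gallons
    chiller_cop = 5.0                         # assumed chiller plant efficiency
    electricity_price = 0.08                  # assumed $/kWh

    evap_cost = water_cost_per_kgal           # cost of the water evaporated
    chiller_kwh = heat_kwh / chiller_cop      # electricity to reject the same heat
    chiller_cost = chiller_kwh * electricity_price

    print(f"Heat rejected per 1,000 gal evaporated: ~{heat_kwh:,.0f} kWh thermal")
    print(f"Evaporative water cost: ${evap_cost:.2f} vs chiller electricity: ${chiller_cost:.2f}")
```

Even with generous assumptions for the chiller, the water route comes out more than an order of magnitude cheaper per unit of heat rejected, which is the trade behind these designs.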

As water helps cool, or entirely cools, our data centers, it plays an ever greater role in achieving the greatest efficiency. Hence, water quality, capacity, cost and reliability of service are just as important as any other valuable input to our operations, making these factors and the future cost of water even more important in our site selection decisions. I’ve seen water cost anywhere between $0.10 and $10.00 per 1,000 gallons. Wow! What a spread! And I’ve seen it increase at 40% per year! Wouldn’t it be nice to have a consistent price from a non-profit water system that YOU control, with full visibility into all costs? And one that is built to meet the high-availability and quality standards of data centers, and is DEDICATED to data center use? That is what you get at the Reno Technology Park!
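Here is a quick, hypothetical illustration of what that price spread and escalation can mean for an annual water bill. The 50 million gallons per year of consumption is an assumed figure for the example, not any particular site’s usage.

```python
# Illustrative annual water bill for a data center using evaporative cooling,
# across the $/1,000-gallon price spread mentioned above.

ANNUAL_GALLONS = 50_000_000                    # assumed yearly make-up water use
PRICES_PER_KGAL = [0.10, 1.00, 5.00, 10.00]    # $/1,000 gallons
ESCALATION = 0.40                              # 40% per-year increase seen at some utilities
YEARS = 5

for price in PRICES_PER_KGAL:
    year1 = ANNUAL_GALLONS / 1000 * price
    year_n = year1 * (1 + ESCALATION) ** (YEARS - 1)
    print(f"${price:>5.2f}/kgal -> year 1: ${year1:>11,.0f},  year {YEARS} at 40%/yr: ${year_n:>11,.0f}")
```

At the low end the bill is a rounding error; at the high end, with 40% annual escalation, it becomes a multi-million-dollar line item within a few years, which is why water price and governance belong in the site selection model.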

And it’s not just the supply but also the discharge of water. I learned much about water discharge challenges in Quincy, WA, when building the Yahoo! data center there: the local water utility wanted Microsoft and Yahoo! to pony up $10-15 million for a new water treatment plant to handle the QUANTITY of our discharge water. Our quality was fine, but the quantity was too much for the existing systems. This led me to look for ways to reduce cooling tower blowdown and avoid this $10+ million unplanned cost to our project.

I’ve always been a fan of chemical-free water treatment systems, but when looking for new solutions to this problem, I came across WCTI, which makes a chemical-free system quite different from other systems and could get our cycles of concentration up over 200!!! Yes, that is over 200 cycles of concentration, which means nearly zero blowdown, which in turn lowers water consumption by 30-50% and avoids paying for a new water treatment plant for the city. And it’s truly chemical free (not even biocides), which means it’s safer for people and the environment, as well as much lower cost. Keep those chiller tubes and/or pipes clean!
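The water math behind that claim comes from the standard cooling tower mass balance, where blowdown falls off quickly as cycles of concentration (C) rise. The sketch below uses an assumed evaporation rate and ignores drift, but it shows why pushing cycles from the typical 2-6 up toward 200 cuts make-up water on the order of 30-50%.

```python
# Cooling tower make-up water as a function of cycles of concentration (C).
# Standard mass-balance relations: blowdown = evaporation / (C - 1),
# make-up = evaporation + blowdown (drift neglected for simplicity).

def tower_water_use(evaporation_gpm: float, cycles: float):
    """Return (blowdown_gpm, makeup_gpm) for a given evaporation rate and cycles."""
    blowdown = evaporation_gpm / (cycles - 1)
    makeup = evaporation_gpm + blowdown
    return blowdown, makeup

evaporation = 100.0   # assumed gallons per minute evaporated by the tower

for cycles in (2, 4, 6, 200):
    blowdown, makeup = tower_water_use(evaporation, cycles)
    print(f"cycles={cycles:>3}: blowdown={blowdown:6.2f} gpm, make-up={makeup:6.2f} gpm")
```

Going from 2 cycles to 200 cuts make-up water roughly in half; starting from 4-6 cycles the savings are smaller but still substantial, which is where the 30-50% range comes from, and the blowdown sent to the sewer all but disappears.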

This is one of the comprehensive solutions that we provide for our clients at MegaWatt Consulting. It’s about saving money, and water is just another critical part of our system. Reach out to us to learn more!

Coal-Burning Power Plants Must Finally Reduce Mercury Emissions

Thursday, March 1st, 2012

Coal-burning power plants account for the vast majority of the mercury we are exposed to; I’ve read statistics attributing 80-95% of our mercury exposure to them. In the US, coal-fired power plants are estimated to be responsible for half of the nation’s mercury emissions.

The mercury in those emissions literally rains down on land and sea, falling on the crops we eat, into the rivers and oceans we fish, onto our backyards and into our lungs. Mercury exposure leads to many very serious mental and physical disorders.

“According to the U.S Environmental Protection Agency, mercury is responsible for thousands of premature deaths and heart attacks. It can also damage children’s nervous systems and harm their ability to think and learn. The mercury, in essence, falls back to earth where it gets into the food chain.” (EnergyBiz, “Obama Showers Coal with Mercury Rule”, Jan 3, 2012, http://www.energybiz.com/article/12/01/obama-showers-coal-mercury-rule). I’ve read EPA reports estimating 50,000 premature deaths every year in the US due to emissions from coal-burning power plants. Imagine losing an entire city of 50,000 people every year; that is a population not much different from Palo Alto, CA. And that figure does not count the lung-related issues, such as asthma, that develop from these emissions.

Well, the Clean Air Act provides each of us the right to clean air. As such, in December 2011, “the EPA carried out its obligation under the 1990 Clean Air Act and demanded that coal-fired power plants implement the available technologies to reduce their emissions by 90 percent.”

These regulations are not a shock to most utilities, as they have been aware of the pending rules for some time (since the Clean Air Act was put into law), and most utilities actually support them: the rules allow utilities to shut down old coal-fired power plants, which are a financial, legal and environmental liability, in exchange for building new, cleaner-burning and more efficient plants. These new regulations really only affect coal plants constructed 30 to 50 years ago. The operators can choose to bring them up to the new requirements or shut them down and replace them with new, more efficient and less polluting plants, a decision compelled not just by the new regulations but also by the need to compete with lower-cost shale gas. Since most utilities in the US earn a return on building new infrastructure, it is good business to build new power plants. Essentially, the rule sets a more level playing field for the 1,400 coal-fired US power plants and ends 20 years of uncertainty about these regulations.

Will these new regulations cause electricity prices to increase? Yes, but not likely significantly, as the “EPA estimates that the cost of carrying out the new mercury rules will be about $9.6 billion annually. But it also says that payback will be as much as $90 billion by 2016 when all power plants are expected to be in compliance, or closed. The agency expects “small changes” in the average retail electricity rates, noting that the shift to abundant shale-gas will shield consumers.” I agree with that assessment, as shale gas will keep prices down. Even the American Coalition for Clean Coal Electricity’s estimate that the new mercury rule, “in combination with other pending coal-related regulations, will increase electricity prices by $170 billion” through 2020, is not much different from the EPA’s, and is likely to have a very minimal effect on electricity prices since it is such a small percentage of total annual electricity spend.

“Coal helps make electricity affordable for families and businesses,” says Steve Miller, chief executive of the same coal group. “Unfortunately, this new rule is likely to be the most expensive rule ever imposed on coal-fueled power plants which are responsible for providing affordable electricity.” Of course, when one accounts for health-related costs, the new emissions rules are far less costly than paying for your son’s asthma medicine and your father’s lung cancer treatments. Finally, we are getting slightly cleaner air, something the Clean Air Act promised us by law over 40 years ago.

Call for Case Studies and Data Center Efficiency Projects

Wednesday, February 15th, 2012

As many of you know, I have chaired what has become known as the SVLG Data Center Efficiency Summit since the end of its first year’s program, in the fall of 2008, a wonderful summit held at Sun Microsystems’ Santa Clara campus. This has been a customer-focused, volunteer-driven project with case studies presented by end users about their efficiency achievements. The goal is for all case studies to share actual, measured savings: to show what works, the best ways to improve efficiency, and to provide ideas and support for all kinds of efficiency improvements within our data centers. We’ve highlighted software, hardware and infrastructure improvements, as well as new technologies and processes. Through collaboration we all improve. And as an industry, if we all improve, we avoid over-regulation, we help to preserve our precious energy supplies and keep their costs from escalating as quickly, we reduce the emissions our industry generates, and we drive innovation. In essence, we all gain when we share ideas with each other.

As such, I consider this program immensely valuable as an industry tool for efficiency and improvement for all. Consequently, I have volunteered hundreds of hours of my time and forgone personal financial gain to chair and help advance this program, along with many other volunteers who have also given much of their time to make it successful and valuable. I do not have the resources to continually give this much volunteer time, much as I wish I did, but I do hope to provide more support or time with future corporate sponsorship.

I do hope that you can participate in this valuable program and the corresponding event held in the late fall every year since 2008. Below is more information from the SVLG. You can also call me for more info.

Attention data center operators, IT managers, energy managers, engineers and vendors of green data center technologies: A call for case studies and demonstration projects is now open for the fifth annual Data Center Efficiency Summit to be held in November 2012.

The Data Center Efficiency Summit is a signature event of the Silicon Valley Leadership Group in partnership with the California Energy Commission and the Lawrence Berkeley National Laboratory, which brings together engineers and thought leaders for one full day to discuss best practices, cutting edge new technologies, and lessons learned by real end users – not marketing pitches.

We welcome case studies presented by an end user or customer. If you are the vendor of an exciting new technology, please work with your customers to submit a case study. Case studies of built projects with actual performance data are preferred.

Topics to consider:
Energy Efficiency and/or Demand Response
Efficient Cooling (Example: Liquid Immersion Cooling)
Efficient Power Distribution (Example: DC Power)
IT Impact on Energy Efficiency (Example: Energy Impact of Data Security)
Energy Efficient Data Center Operations
In the final version of your case study, you will need to include:
Quantifiable savings in terms of kWh savings, percentage reduction in energy consumption, annual dollar savings for the data center, or CO2 reduction
Costs and ROI, including all implementation costs with a breakdown (hardware, software, services, etc.) and the time horizon for savings
Description of site environment (age, size or load, production or R&D use)
List of any technology vendors or NGO partners associated with project
Please submit a short (1 page or less) statement of interest and description of your project or concept by March 2, 2012 to asmart@svlg.org with subject heading: DCES12. Final case studies will need to be submitted in August 2012. Submissions will be reviewed and considered in the context of this event.
Interested in setting up a demonstration project at your facility? We may be able to provide technical support and independent evaluation. Please call Anne at 408-501-7871 for information.
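For anyone preparing a submission, here is a hedged illustration of the kind of quantifiable-savings math the case study requirements above ask for: kWh saved, dollar savings, CO2 reduction and simple payback. Every number in it is a placeholder assumption, not data from any real project.

```python
# Example calculation of case-study savings metrics. All inputs are
# hypothetical placeholders; replace them with measured project data.

baseline_kwh = 8_000_000        # annual energy use before the project (assumed)
post_project_kwh = 6_400_000    # annual energy use after the project (assumed)
electricity_price = 0.10        # assumed $/kWh
emissions_factor = 0.5          # assumed kg CO2 per kWh for the local grid
implementation_cost = 450_000   # assumed total cost: hardware, software, services

kwh_saved = baseline_kwh - post_project_kwh
percent_saved = 100 * kwh_saved / baseline_kwh
dollar_savings = kwh_saved * electricity_price
co2_saved_tonnes = kwh_saved * emissions_factor / 1000
simple_payback_years = implementation_cost / dollar_savings

print(f"Savings: {kwh_saved:,} kWh/yr ({percent_saved:.0f}%), ${dollar_savings:,.0f}/yr, "
      f"{co2_saved_tonnes:,.0f} t CO2/yr, simple payback {simple_payback_years:.1f} years")
```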

Data Center Site Selections need to be more comprehensive than they once were

Thursday, July 7th, 2011

Having completed site selections for many data centers in more than 20 countries, including for Yahoo, Google, Facebook, Equinix, Exodus, and many others, I’ve learned quite a few things. I’ve been part of the changing criteria, evolving from simply being near fiber lines to adding power capacity, energy price, sales taxes and property taxes, and now climate, carbon impact and water supply as well. I think we’ll soon see income tax added as a major cost driver in site selections too. Having done this for over a decade, I’ve taken on these new elements of data center site selection and driven the focus on them. Nearly 10 years ago I was considering power capacity, energy price, water supply, carbon intensity of the power supply, climate and taxes, only to see the industry finally accept all of these criteria as primary decision factors.
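One simple way to make those criteria comparable across candidate sites is a weighted scoring matrix. The sketch below is purely illustrative: the weights, scores and site names are hypothetical, and a real evaluation also folds in hard numbers for power, water and tax costs plus the risk factors discussed next.

```python
# Simplified weighted-scoring sketch for data center site selection.
# Weights, criterion scores and site names are all hypothetical examples.

# Relative importance of each criterion (weights sum to 1.0)
weights = {
    "power_capacity": 0.20,
    "energy_price": 0.20,
    "taxes": 0.15,            # sales, property, and potentially income tax
    "climate": 0.15,          # free-cooling hours, extremes
    "carbon_intensity": 0.10,
    "water": 0.10,            # supply, quality, cost, discharge
    "fiber": 0.10,
}

# Scores from 1 (poor) to 10 (excellent) for two hypothetical candidate sites
sites = {
    "Site A": {"power_capacity": 9, "energy_price": 7, "taxes": 6,
               "climate": 8, "carbon_intensity": 5, "water": 7, "fiber": 8},
    "Site B": {"power_capacity": 6, "energy_price": 9, "taxes": 8,
               "climate": 6, "carbon_intensity": 8, "water": 5, "fiber": 7},
}

for name, scores in sites.items():
    total = sum(weights[c] * scores[c] for c in weights)
    print(f"{name}: weighted score = {total:.2f} out of 10")
```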

While the risk of natural and human-caused disasters has always been part of every data center site selection, that part has seriously changed. My 20+ page checklist of hundreds of items, from nearby manhole covers, flight paths and train tracks to the nearest police station, has not been as heavily used, as it seems to add less value than thinking about the BIG natural disasters that can occur and the unforeseen human-caused ones. We used to worry about a truck of guys with AK-47s jumping out to break into a data center, but the reality is that this is a low-probability event and one that is difficult to prevent. Meanwhile, the risks we can actually prepare for, both known and still emerging, are the ones we have not focused on well.

For example, has anyone thought about their utility system being hacked and shut down for an extended period of time? Have you asked your electric utility if they are NERC CIP compliant, to ensure that they have a much lower chance of being hacked and shut down? Have you thought about your electric utility meter, water meter, main switchboard and generator switchgear being connected to the Internet and/or your utilities, and thus exposed to being hacked, shut down, or damaged?

And the main thing: how about natural disasters? As an industry, we’ve built data centers in seismically active areas (e.g. Japan, California, Oregon (also with extreme tsunami risk) and Washington) and built them so the building stays up, but we don’t think about all of the IT gear shuffling about and the personnel getting hurt. A building that stands while the IT gear rolls around like marbles isn’t a data center that will ride out an earthquake, only one that will memorialize what happened while we rebuild the inside.

We build data centers in hurricane and tornado areas (Texas, Kansas, Nebraska, North Carolina, South Carolina, Virginia, Georgia) and build for those hazards pretty well, but do we think about what has not come yet but likely will?

I’ve written before about the most dangerous seismic area in the US being not on the West Coast or even the East Coast, but the New Madrid fault zone, sitting right under Kansas City, St. Louis and a large part of middle America.

Lately we’ve had tremendous flooding along the Mississippi and Missouri Rivers, and yesterday tremendous dust storms hit Arizona; look at these amazing photos of the dust storm that hit Phoenix (http://www.time.com/time/photogallery/0,29307,2081646_2290849,00.html). Future heat waves will likely add to the dust storms in the Phoenix area. Do you want your data center operating in this?

Severe hurricane and tornado frequency has increased manyfold over the last decade, and we saw more serious renditions of each over the last several months, including in places we weren’t expecting, such as Massachusetts and Missouri, where tornadoes tore through very robust buildings, even a hospital data center (http://www.datacenterdynamics.com/focus/archive/2011/06/missouri-tornado-destroys-hospital-data-center). Look at these photos of the devastation in Alabama from recent tornadoes: (http://www.nytimes.com/2011/05/05/us/05missing.html?_r=1&nl=todaysheadlines&emc=tha23). Imagine you and your employees living through something like this; would they even come to work? One would likely need to shut down the data center for lack of staff even if everything kept working.

My point is: do not overlook the seriousness of your data center site selection. Consider what MAY happen, with some probability, and don’t assume that just because something hasn’t happened recently, it won’t. Research the probabilities. The web is a wonderful tool for this information, and so are your data center site selection experts at MegaWatt Consulting and others. Use us to help you avoid future problems.

Stay healthy and let’s help each other grow our industry. KC Mares

Considering all of the vulnerabilities of data center sites

Thursday, May 5th, 2011

Where to hide your data center and protect it from damaging natural disasters?

I have built two data centers in the Raleigh, North Carolina area. I traveled to Raleigh about once per month over a couple of years for these projects, many times driving in ice storms. It’s really quite fun to drive around when everything is coated in a sheet of ice; it’s like driving a Zamboni without an ice rink. Quite frankly, only people like me who have too much confidence in their driving abilities drive; everyone else stays home, and for good reason, as many cars end up stuck on the roads and crashed in these conditions. Recently, storms in the Raleigh area caused a wide path of “death and damage,” as reported in the NY Times, with emergencies declared throughout North Carolina, Mississippi and Alabama. More extreme weather is predicted for the eastern seaboard as the climate changes. Hurricane frequency and strength have increased over the last few years. Remember when one good hurricane a year was normal? Now it’s dozens, so much so that the naming conventions have had to evolve, from female names only to including male names, and in very active seasons the standard alphabetical list has been exhausted entirely.

Remember when California was the only place we expected large earthquakes? Well, except for Japan, which reminded us once again of the devastation that can occur along the Pacific Rim. I was in middle Baja following the recent Japan earthquake and had to change plans due to a tsunami warning from that quake nearly 10,000 miles away, proving the point that being near the ocean following an earthquake can be risky.

The largest earthquake in 35 years hits Arkansas… what, you ask?! Arkansas? Yes, the largest in that state in 35 years, and just one of more than 800 earthquakes in Arkansas since September 2010. Wow!! You can read more about it in this AP/Yahoo news article.

But even more spectacular, as I bring up earthquakes in Arkansas merely as an example, is that the largest risk of large-scale earthquake damage in the US sits right under the middle of the country: the New Madrid Fault. Directly under Kentucky, Indiana, Illinois, Tennessee, Mississippi and Arkansas, this baby is HUGE, with the ability to create horizontal acceleration of 1.89g, almost 5 times greater than the ground acceleration at The Reno Technology Park near Reno, NV, which is located on stable ground absent of any earthquake faults. See this thesis on the effects of earthquakes on bridge design, which is the pinnacle of civil engineering for earthquakes, as bridges are designed for 75-year effects, not the 20 years typical of most building construction. Even Texas is not immune to earthquakes, having had damaging quakes in 1882, 1891, 1917, 1925, 1931, 1932, 1936, 1948, 1951, 1957, 1964, 1966, 1969, and 1974, many of them felt as much as two states away from Texas, which itself covers a very large area. I list all of these just to prove the point that even areas thought to be immune from damaging earthquakes have them, and more frequently than we care to remember. You can read more in this USGS article about Texas earthquakes here.

And thus the punch line: consider data center site selection very carefully. Just because an earthquake has not happened in a long time does not mean that an area is immune to a damaging one. Check out this map of large earthquake potential and look at the two large circles of converging lines in the middle of the US and under South Carolina; these are the areas of greatest earthquake threat to the public and to buildings in the US.

How about volcanoes? Sure, why worry unless you’re in the South Pacific, Hawaii or Costa Rica, right? Wrong. Over half of the world’s active volcanoes are in… did you guess… the good ole US of A. That’s right. Most of those are in Alaska, as the Aleutian island chain is a pretty exciting place to be, and most of those in the continental US are located in Washington and Oregon. But guess what the most exciting place in the US is for a very damaging eruption, of proportions thousands of times greater than the atomic bombs dropped on Japan to end World War II? Wyoming. Yellowstone has long been famous for Old Faithful, heated by a geological hot spot, the same type that has created and is still creating the Hawaiian Islands. But new research calls it a supervolcano. Two of its larger eruptions produced 2,500 times more ash than the Mt. St. Helens eruption in 1980, which itself dropped ash across eastern Washington and elsewhere. And this hot spot is getting hotter. It is expected to impact Idaho, Wyoming and Montana with a greater frequency of earthquakes and a possible very large eruption that could wipe out a very large area. Read more about it here.

Why is all of this important to point out? Because we’ve designed and built data centers to withstand the impacts we EXPECT in a given area, yet so many areas have more hazards than we imagined. Which leads me back to site selection. Site selection isn’t as easy as looking at what has recently occurred or what we think might occur in an area; it should involve thorough research into what the risks really are over time, and choosing a site that best meets our risk tolerance, our “comfort,” over the life of the data center. And all risks should be reviewed, even those that seem unlikely, because, as many of these examples show, unlikely events can turn out to be devastating to any data center. Hence, location research is paramount to good site selection, and these issues must not be overlooked. A good example is the roughly 20 active volcanoes of the Cascade Range within reach of the Portland and Seattle areas. Be aware of the risks in your decision, or it could lead to a really bad day.