
Au revoir Olivier Sanche

Tuesday, November 30th, 2010

Data center leader Olivier Sanche passed away Thursday of a reported heart attack, as reported in Data Center Knowledge. It is with great sadness that I share this news. Olivier was an industry friend and a fellow champion of energy efficiency ideas. He was blessed with the ability to think differently about approaches to data centers, a trait he developed before joining Apple, Inc., where he had provided global data center leadership since August of last year.

Olivier and I shared many ideas at industry meetings about metrics and efficiency, often delivering the same message: the most efficient data center is still the one we do not build.

He shared many great experiences and moments with others as well, as David Ohara and Mike Manos have written.

I know the Apple team will miss him terribly. He was loved by the fine folks there as he drove positive change within the organization he was building and thought differently about data center solutions.

It was only a month ago that he and I shared a dinner. He spoke of his love for his daughter and his passion for building the most environmentally efficient and sustainable data centers. Just over a week ago we exchanged emails, phone calls and, in typical Olivier style, several text messages.

A few months ago we were sitting next to each other with Chris Page of Yahoo at a data center conference, and we shared frustration about how most of the speakers were selling product instead of sharing ideas. He was fun to be around.

I have many other fond memories of Olivier and of the passion and vision he shared for a more sustainable data center industry. Olivier, your ideas and passion will be sorely missed. You will be missed as a data center leader, but even more for your love of your family and your daughter.

Au revoir Olivier. We will raise a toast to you.

Switzerland for Yahoo! Data Center

Monday, October 11th, 2010

Yahoo announced that they are building a data center in Avenches, Switzerland. This is a project I worked on over two years ago while I was with Yahoo! I don’t talk about material non-public information from my ex-employers or clients (even if unprotected by an NDA), so now that it is public, I’ll share some more about this site selection and strategy. As Yahoo! disclosed during their 2010 investor presentations, in 2007 (when I started as an employee at Y!) over 90% of data center capacity was leased. Due to the strategy that I and others put into place, now being implemented through these data center builds, Yahoo! estimates that about 90% of data center capacity in 2013 will be owned, yielding a 35% lower total cost by 2014. It is these lower costs that drive companies to build their own data centers, and Yahoo! certainly has the economies of scale to make this effective. (Coincidentally, the lower total costs also make hardware a larger percentage of overall cost, supporting James Hamilton’s (Amazon) point that servers are the biggest cost item in data centers.)
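To make the own-versus-lease arithmetic concrete, here is a minimal sketch of the kind of comparison involved; every figure in it is a hypothetical placeholder, not an actual Yahoo! number.

```python
# Hypothetical own-vs-lease total cost comparison; all figures below
# are illustrative assumptions, not actual Yahoo! numbers.

def owned_cost(capacity_mw, years, capex_per_mw, opex_per_mw_yr):
    """Undiscounted total cost of building and running owned capacity."""
    return capacity_mw * (capex_per_mw + opex_per_mw_yr * years)

def leased_cost(capacity_mw, years, lease_per_mw_yr):
    """Undiscounted total cost of leasing the same capacity."""
    return capacity_mw * lease_per_mw_yr * years

mw, horizon = 20, 7  # assumed 20 MW footprint over a 7-year horizon
owned = owned_cost(mw, horizon, capex_per_mw=8e6, opex_per_mw_yr=1e6)
leased = leased_cost(mw, horizon, lease_per_mw_yr=3.3e6)
print(f"owned ${owned/1e6:.0f}M vs leased ${leased/1e6:.0f}M "
      f"-> {1 - owned/leased:.0%} lower total cost")  # ~35% with these inputs
```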

Some may ask: why put a data center in Switzerland? Well, there are many good reasons to choose Switzerland (CH) for a data center in Europe. It does have higher energy prices than Russia, Ukraine and Finland, as well as the ex-Baltic states of Lithuania (half of my heritage), Latvia and Estonia, yet its prices are low compared with the rest of Europe. With over 1,000 small utilities, energy prices can vary considerably throughout the country. We can rule out Russia and Ukraine, despite their low energy prices, because of the challenges of conducting business there, especially on complex data center construction projects. The ex-Baltic states can be excellent, but infrastructure limitations in some areas make projects harder to complete. While I was with Google we looked at locations in this region.

CH has excellent fiber all around the country. It sits in the middle of Europe, so latency is superb to all of Europe. The workforce is superb as well: highly educated, with most people speaking multiple languages (French, German, Italian and Romansh are the national languages, and English is widely spoken). Construction quality is probably the best in the world, with construction and engineering firms from all over Europe providing skills. Geneva is a one-hour flight from much of Europe, including England, France and Germany. Train service is superb, with quality trains, regular schedules, low fares and precisely on-time departures and arrivals, making driving seem mundane and archaic.

A lovely “bonjour” welcomes you everywhere you go. And the Alps seem to surround CH’s gorgeous green countryside and quaint small towns. Avenches, where we chose to put this Yahoo! data center, is one of those quaint towns, with a Roman amphitheater and other Roman history, a small lake and lovely green fields. Plus, traveling in through Geneva and then on to Lausanne is beautiful, across the lake from the French town of Evian of bottled-water fame, with reflections of snow-covered Alps glistening off the lake. Can you tell I thoroughly enjoyed working on this project? The food is tops in Europe, the air is clean, the water is clean, the cities are clean, and the power is clean. Avenches even has its own district heating loop fed by a small biomass plant. With most of CH’s electricity generated from nuclear and hydro, CH has one of the lowest carbon impacts per kWh anywhere in the world, and this is important to Yahoo! as they strive to be carbon neutral, a very difficult achievement for a growing company with many data centers around the globe.

So CH is clean, beautiful, easy to do business in, central to Europe, safe, close to Chamonix and Zermatt for weekend skiing side trips, and it has low taxes, some of the lowest in all of Europe. That makes CH a great place to put corporate headquarters as well. The latest Global Competitiveness Report places CH tops in its overall ranking:

“Switzerland tops the overall ranking in The Global Competitiveness Report 2010-2011 released by the World Economic Forum. The United States falls two places to fourth position, overtaken by Sweden (2nd) and Singapore (3rd). The Nordic countries continue to be well positioned in the ranking, with Sweden, Finland (7th) and Denmark (9th) among the top 10, and with Norway at 14th. Sweden overtakes the US and Singapore this year to be placed 2nd overall. The United Kingdom, after falling in the rankings over recent years, moves back up by one place to 12th position.”

I’ve worked on data center projects in all of the top-10 countries and in most countries in Europe. Switzerland is excellent, but so is Sweden: they have been working hard to lower taxes and energy rates, their energy is very clean, and low ambient temperatures plus plentiful cold water allow direct cooling to reduce energy use. Sweden actually has fairly low effective tax rates, much lower than one would expect of a Nordic country, and offers fiber access into the ex-Soviet bloc. I predict that we’ll see more new data center developments in the Nordic countries and ex-Baltic states, and some in Switzerland, as there will be a gradual transition from the key European data center cities (London, Amsterdam, Frankfurt and Dublin) toward Northern and Eastern Europe for lower energy costs, lower energy use and lower taxes, and also to reach the large Russian Internet market.

I’d love to share more about my experiences and knowledge of low-cost, excellent places to put data centers in Europe. Call to schedule a time to discuss your European expansion. In the meantime, enjoy that croissant. Au Revoir!

Humidity control and chillers

Wednesday, September 29th, 2010

As recently reported, Yahoo has opened their Lockport, NY data center with an air economization focus. While air economization has been much discussed and a few data centers have utilized it, for example the EDS (now HP) Wynyard data center in the U.K. and the Advanced Data Center in Sacramento (both Rumsey Engineering designs), not many have utilized air economization in fairly warm and humid environments. (The climate at Wynyard is humid yet cool, while ADC in Sacramento is warm yet dry.) Yahoo makes this work by removing data center humidity control.

Many folks balk at this idea, yet server (plus network and most storage equipment) specs allow for 5-95% RH, often broader, and the NEBS standard, which has governed telco equipment for decades, has never had a humidity requirement. (So why does ASHRAE? That is a topic for a future blog post.) So while some will balk at the idea of no humidity control in a data center, there is no known loss of IT equipment from lack of humidity control. (Intel has published a paper showing that even very dramatic humidity changes, 10-90% within one hour, can have a very minimal effect on equipment failures, and IBM has published a study on high humidity combined with unusually high concentrations of gaseous pollutants.)

I operated a data center for a co-lo company in dry Reno in 2003-2005, and then another for BEA (now Oracle) in 2006-2007, with very broad humidity ranges in order to save energy. I slowly expanded the range until no humidification was done at all; humidity ranged from 10-30% year-round without any equipment failures, ESD issues, etc. The Yahoo data center will be another excellent test bed to show the effects of no humidity control on hardware at large scale.
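To illustrate how a wide allowable humidity window simplifies control, here is a minimal sketch (my own illustration, not Yahoo’s control logic) that checks whether outside air fits a 5-95% RH equipment spec and keeps its dew point below an assumed supply-air limit, using the standard Magnus approximation:

```python
import math

def dew_point_c(temp_c, rh_pct):
    """Dew point via the Magnus approximation (good roughly 0-60 C)."""
    a, b = 17.62, 243.12
    gamma = math.log(rh_pct / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

def outside_air_ok(temp_c, rh_pct, rh_min=5.0, rh_max=95.0, limit_c=27.0):
    """True if the air's RH sits inside the wide 5-95% spec range cited
    above and its dew point stays below an assumed 27 C supply limit."""
    return rh_min <= rh_pct <= rh_max and dew_point_c(temp_c, rh_pct) < limit_c

# A made-up warm, humid summer hour: in spec with no humidity control at all
print(outside_air_ok(temp_c=29.0, rh_pct=70.0))  # True
```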

Air economization is a great way to reduce energy use, and while I led data center strategy and development for Yahoo (2007-2008) I championed many of these ideas. Still, I’ve found on recent data center design projects (by Rumsey Engineers) that water economization leads to a lower PUE in most climates. Yahoo is forecasting an excellent 1.08 PUE; I hope they’ll share actual usage data after months and years of operation, as well as lessons learned.
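For readers newer to the metric, PUE is simply total facility energy divided by IT energy, so a 1.08 forecast implies only 8% overhead for cooling and power delivery. A quick sketch with illustrative numbers:

```python
def pue(total_facility_kw, it_kw):
    """Power Usage Effectiveness: total facility power over IT power."""
    return total_facility_kw / it_kw

it_kw = 10_000                   # assumed 10 MW IT load
for overhead in (0.08, 0.50):    # a 1.08 design vs a legacy-style 1.5
    total = it_kw * (1 + overhead)
    print(f"PUE {pue(total, it_kw):.2f} -> "
          f"{total - it_kw:,.0f} kW of cooling/power overhead")
# At 10 MW of IT load, PUE 1.5 burns 5,000 kW of overhead vs 800 kW at 1.08.
```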

Perhaps the most novel element of this recent Yahoo design is the idea of not needing chillers. Yahoo chose to use cooling towers and water economization for the hot days (which begs the question: why not water economization all of the time? The benefits would be a single cooling system instead of two, plus lower fan horsepower). Nonetheless, this is an idea I floated when I was with Yahoo and have implemented on three other data center projects over the last year, including a design we recently developed for a large financial data center at very high redundancy: without chillers, while still meeting Tier IV availability and high density, we achieved a 1.08 PUE. The really impressive part is that our total construction budget is about $3 million per MW of IT load, a construction cost lower than any I have seen before.

In many climates, chillers are not needed to maintain an ASHRAE allowable supply temperature range, and I commend Yahoo for going chiller-less in a humid and warm climate. If they can do this, we should all ask why we need humidity control and such tight supply air ranges. Less equipment not only means lower capex and opex, but also higher availability. We don’t need more equipment to gain efficiency or availability; often we need less, and that is what all of our designs consider.
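The chiller-less reasoning comes down to wet-bulb arithmetic: evaporative cooling, whether in a cooling tower (water economization) or directly in the air stream, can only approach the wet-bulb temperature, and with ASHRAE’s allowable inlet range that is usually close enough. A minimal sketch of the direct-evaporative case, with an assumed media effectiveness and a made-up design hour:

```python
def evap_cooled_air_c(dry_bulb_c, wet_bulb_c, effectiveness=0.8):
    """Direct evaporative cooling: supply air approaches the wet-bulb
    temperature; the 80% effectiveness is an assumed media rating."""
    return dry_bulb_c - effectiveness * (dry_bulb_c - wet_bulb_c)

# Hypothetical hot, humid design hour (not actual site data)
dry_bulb_c, wet_bulb_c = 32.0, 23.0
supply_c = evap_cooled_air_c(dry_bulb_c, wet_bulb_c)
allowable_max_c = 32.0  # ASHRAE class A1 allowable inlet upper bound
print(f"evap-cooled supply: {supply_c:.1f} C; "
      f"within allowable: {supply_c <= allowable_max_c}")
# 32 C dry bulb / 23 C wet bulb gives ~24.8 C supply air: no chiller needed.
```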

On to other topics: a client of mine is searching to fill some high-level electrical and mechanical engineering positions; let me know if you or someone you know is interested.

Pardon my lack of recent blog posts; a wonderful one-week holiday in France (a few selected photos I took on the trip are here), combined with many new projects and project advancements, has kept me working 20-hour days for several weeks now. Nonetheless, thank you for reading; I’ll try to keep you posted more frequently about the many exciting projects I am working on. You can hear more from me at the SVLG Data Center Efficiency Summit on Oct 14th, or at my data center efficiency workshop in NYC in October. Au revoir!

Smaller, modular data centers & data center news and job post

Tuesday, August 10th, 2010

Over my 12-plus years in the data center industry, I’ve seen data centers get larger and larger. When I was with Sun Microsystems, we had over 1,000 data center closets, labs and rooms, but no large data centers. There were many challenges in providing and maintaining all of these “mini” data centers, each wanting its own UPS and generator support while needing to run house air-conditioning units 24/7 in office buildings that should have been shut off on weekends and evenings. I ran the numbers and realized we could supply all of these needs in a larger, shared data center for a much lower total cost. I proposed to Michael Lehman, then Sun’s CFO, a plan for an internal co-location data center, complete with separate cages for each group to securely house their servers. This was around 1998.

Next, when I was with Exodus Communications, the company that started the co-location industry, the math again favored bigger as better, or at least lower cost. As a member of the “build team” traveling the world to find and negotiate the next data center location and then design it, I saw that the larger we made the data centers, the lower the total cost per unit to build, own and operate them.

Later, I was with Google operating and acquiring large data centers, and I had the privilege of running the largest data center, in square feet, that I’ve ever known. Now fast-forward through years of data center design, construction, operations and efficiency programs, and I’ve come to see that while larger may be cheaper to build and operate at the time of construction, in most cases larger data centers cost more over time than ‘medium-sized’ (size relative to time-to-fill) data centers. Why? Large data centers are rarely future-proof. Server technology leaps ahead a generation every 18 months; software generations are often 6-12 months. New infrastructure solutions come along every year, and capacity planning is rarely good more than 6-12 months out, and sorely inaccurate at that. Case in point: a data center I built for Yahoo that would take 3-5 years to fill later needed modifications to accommodate new cooling technologies, and there was even a desire to move it to a different state to take advantage of changing tax laws. Yet modifying a concrete shell and relocating an entire new data center are impractical and often costly propositions.

If we build data centers that are scalable, that is smaller and sized for 1-3 years of capacity, then we can adopt new technologies and solutions, and respond to changing business capacity and needs, quickly and at lower cost over time. This may mean we plan for the next ten years when we site-select our data center “campus”, but build out smaller shells on more frequent build cycles rather than one large building meant to last 5+ years. I spoke about this in a recent ComputerWorld article.
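Here is a toy model of that trade-off (every number is invented for illustration): one big shell built up front carries idle capital for years, while phased smaller shells track demand.

```python
# Toy comparison: one 10 MW shell up front vs phased 2 MW shells.
# All costs and the demand ramp are invented for illustration.
CAPEX_PER_MW = 6e6            # $/MW to build shell and fit-out (assumed)
CARRY_RATE = 0.10             # annual carrying cost on idle capital (assumed)
demand_mw = [2, 4, 6, 8, 10]  # assumed IT load ramp over five years

def idle_capital_cost(built_mw_by_year):
    """Carrying cost of capacity built but not yet loaded, summed by year."""
    return sum((built - used) * CAPEX_PER_MW * CARRY_RATE
               for built, used in zip(built_mw_by_year, demand_mw))

big_bang = idle_capital_cost([10, 10, 10, 10, 10])  # all built in year one
phased = idle_capital_cost([2, 4, 6, 8, 10])        # shells added as needed
print(f"idle-capital cost, big bang: ${big_bang/1e6:.0f}M")  # $12M
print(f"idle-capital cost, phased:   ${phased/1e6:.0f}M")    # $0M
```

In practice there are offsetting effects (construction economies of scale, repeated mobilization costs), but the idle-capital and retrofit penalties are what the 1-3 year sizing above is designed to avoid.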

The ComputerWorld article also mentions the “butterfly” data center concept from HP, discussed further in this article. The key tenet of the plan is to build smaller buildings on a campus with shared network, personnel and other benefits. Why not also build a small computer factory on the campus to serve the data centers and the surrounding area? Another take on this approach is containers; eBay just added themselves to the fold of those considering containers for future modular scalability.

I’ve built many of the lowest-cost data centers ($4-8 million per MW of IT load) and some of the most energy-efficient (PUE 1.04-1.10), and I believe these smaller data centers can be built for very comparable cost and efficiency figures, likely the same, and perhaps even better, especially over time as retrofits become less likely and less costly. I’ve seen examples of this in many of the data center projects I’ve been working on recently, which at 10+ MW are ‘medium’ in relative terms.

Various data center news:
A fast-growing company is looking to fill a data center manager position in Santa Clara; IT-focused, Linux & storage. Write me if you are interested.

Terremark Worldwide released solid Q1 results, raising annual revenue projections. Guidance for 2011 assumes no federal IT project revenue, although that has been accounting for 10% of revenue. Cloud revenue is growing and now accounts for 8% of total revenue.

Internap Network Services released its Q2 results, which were on target for revenue and margins, with gross margin significantly increased by moving out of partner data centers into company-owned ones. I expect this trend to continue, as folks move from outsourced to company-owned data centers. I can help you with this transition and the best strategy for it, having completed it many times now with excellent results.

Investment in “greener data centers” is set to increase over 5x in five years per Pike Research, or, put another way, to capture 28% of the global market by 2015, per this story that I highlighted in a previous blog post.