Archive for the ‘Renewable Energy’ Category

Integrating the UPS, the PDU, the rack and more

Friday, April 5th, 2019

I’ve been thinking about this for over a decade, and I will finally share my thoughts. Why do we have in our data centers a frame, aka a rack, to which we literally bolt many hardware devices, then run power cables from each of them to separate devices, and network cables to still other separate devices, each of which, like the hardware, has its own network and power connections? Each of these adds devices and thus takes up available rack space, while also adding network connections and power and network cables and cords, all taking up space and requiring routing and management. Obviously there is not just a cost to the cables, but an environmental impact to making them, and a cost to managing them. We spend money on cable management to make all of these cables look pretty, a telltale sign that the IT team has it together, much like the sign a clean desk can send, whether falsely or accurately.

However, what if the rack, the PDU, the UPS, the networking patch panel, and all of these cables were integrated into one device? Then we wouldn’t spend hours cabling, wiring, and attaching these components, or spend money on them, while they inhibit airflow, add weight and cost, and add the entanglement and complexity that make it hard to quickly discern where a problem lies. What if each piece of hardware had its own current transducer on the incoming power to measure and report power usage, and that same incoming power feed also had a resettable breaker so that power could be remotely turned on or off, much like a rack-mounted PDU? Why is this in a separate device instead of integrated within the same hardware device that already has a network connection and the ability to collect and report this information?
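As a concrete sketch of the idea, here is what per-device power telemetry and breaker control might look like once integrated into the hardware itself. This is purely illustrative Python of my own; the class and field names are hypothetical, not any vendor’s API:

```python
from dataclasses import dataclass

@dataclass
class PowerChannel:
    """One integrated power feed: a transducer measurement plus a remote breaker."""
    device_id: str
    amps: float           # instantaneous current read from the transducer
    volts: float          # measured line voltage
    breaker_closed: bool  # True = power flowing to the device

    @property
    def watts(self) -> float:
        # Instantaneous power draw of this device
        return self.amps * self.volts

    def trip(self) -> None:
        """Remotely open the breaker, cutting power to this device."""
        self.breaker_closed = False
        self.amps = 0.0

# A rack becomes a list of channels, all reported over each
# device's existing network link -- no separate metered PDU needed.
rack = [
    PowerChannel("node-01", amps=1.8, volts=208.0, breaker_closed=True),
    PowerChannel("node-02", amps=2.1, volts=208.0, breaker_closed=True),
]
total_watts = sum(ch.watts for ch in rack)
print(f"Rack draw: {total_watts:.1f} W")
```

The point of the sketch is that measurement and switching become attributes of the device itself, queried over the network connection the device already has.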

Why don’t we have hardware that provides all of the power transformation, rectification, energy storage, measurement, reporting, and remote control by circuit for the entire rack, within the rack? Why do we need at least three different devices doing this, all far away from the actual load: a centralized UPS, a floor-mounted PDU or other transformer and circuit panel, a rack-mount PDU, and a power supply for voltage and AC/DC conversion? Doesn’t it seem silly to have so many devices, cables, wires, and corresponding components, when all of this could easily live within one device, with one network connection instead of all of these separate and discrete network connections, translators, monitors, and other tools?

Google integrated energy storage on the hardware device back in 2002 or even earlier. It was a paradigm shift of approach, and I feel like we are on the edge of another one: energy storage is finally available in a way that lets us integrate it into the rack or even into each hardware device. So we should also integrate power measurement and monitoring, power control, and power transformation and rectification, along with energy storage, into one device or every device. That device could be the rack, or it could be a separate hardware device much like our rack-mount power supplies. Overall, the result is a large reduction in power cables and cable management, a reduction in network ports, and a very large reduction in the number of components and devices within the power chain of the data center. At least the approach I envision does: it reduces total cables to four per rack, two network and two power cables, no others…at all. I see little to no value in all of these discrete devices remaining separate, and only benefits to integrating them, in cost, management, environmental impact, and ease of use and design of our data centers. Imagine a data center that is not hamstrung by its UPS capacity or other electrical components, but instead limited only by its onsite energy generation and utility capacity. Yes, cooling capacity will be the next topic to solve, but quite frankly, I’ve been helping design solutions that have dramatically reduced cooling losses and overall cooling capacity for over a decade, and I see many more options for future cooling capacity increases than we have today for electrical capacity increases.

Yet, if we provide racks that have the exact power capacity the hardware needs, with the energy storage that the devices in that rack require, and then remove all of the other “clutter” by better integrating these disparate components, we have data centers that take another leap: they cost less to build and operate, are more energy efficient, and are essentially future-proof to the needs of the hardware and its future business uptime needs, while directly scaling the electrical infrastructure to match concurrent needs. Why have we not yet done this?

I may be giving away the secret sauce of a great idea, or of something already in the works by others. I’ve been talking about this idea and others related to it for years with close industry friends, and yet I am still surprised that it has not been done. The methods to provide a better data center electrical system are right in front of us: common components, integrated together thoughtfully at a much lower TCO. So explain to me why we are not thinking outside of the same component device boxes and advancing our data center electrical systems.

My story in Reno and receiving the Technologist of the Year award

Monday, April 3rd, 2017

A few nights ago I was honored to receive an award from NCET as Technologist of the Year. This journey started nearly 15 years ago, so I thought I would share more about it.

In 2002 I finished the build-out of a colocation data center in Reno, Nevada. I never thought I would come to Reno, yet an opportunity to lead a colocation data center company focused on mid-sized but underserved cities was appealing for many reasons. Early on with this data center in Reno I experimented with and used air economization and hot-cold aisle containment, both then little-known ways to improve data center energy efficiency, perhaps the first use of these techniques, and they did significantly reduce energy use.

Starting in 2004, and for over a decade, I worked mostly remotely from Reno for Google (when we started buying, designing, building and operating internal data centers), Equinix (the largest data center provider), DuPont Fabros (at the time the second largest wholesale data center provider), and Yahoo!, where I managed global data center strategy and development at a time when we were building out large internal data centers and expansions around the globe. I also ran global data centers for BEA Systems before it was acquired by Oracle, and completed long-term marketing and product development consulting for Digital Realty, the largest wholesale data center provider, and many others, including Facebook and other Big 7 Internet companies. I call Apple, Google, Microsoft, Amazon, Facebook, Yahoo! and eBay the Big 7, as they build, own and operate the majority of data centers, outspending all of the colocation providers combined on data center capital every year by a factor of almost 10. I have been lucky enough to work with five of these seven big data center companies.

In the midst of this, I worked with others to create and build the Reno Technology Park (RTP), the largest dedicated data center campus known at the time, located just outside of Reno in Washoe County. I worked with many companies to influence them to locate a future data center in Reno, and secured Apple as the first tenant of the RTP.

I maintain a residence in the Reno area, with its very close proximity to Lake Tahoe and the fabulous skiing, mountain biking, cycling and other activities that I love and have spent much time enjoying over the years. With a home in the area, I avoid the congestion and high cost of living of the SF Bay Area, as well as a state income tax. Many technology workers live and work in the Reno area, and many, like me, live in the Reno-Tahoe area yet commute to the Bay Area or elsewhere for work as needed, including executives of technology companies.

Because of the many great companies and people working in the Reno area, I am even more humbled to receive this award. Thank you NCET and the board for this recognition, and Abbi Whitaker for her nomination. Having developed data centers in over 20 countries and performed data center site selections in almost 30 countries as well as throughout the United States, I saw that Reno, Nevada was a good place to locate data centers, and that they would be great for the local economy. I wanted to bring my industry to my home, and to see the local economy continue to grow and evolve.

I commend the team at EDAWN, and Governor Sandoval and his staff, including Steve Hill, for helping to make these wins happen. I look forward to continuing to work with our community, all of you, NCET and EDAWN to see Reno’s economy grow and develop.

Apple+Reno+Solar = “Controllable Power”

Monday, July 8th, 2013

Some of you know that I have developed the Reno Technology Park along with a few others. I am the sole data center expert in the group, and when I first viewed the property, I saw that it had potential as a site for data centers: the property is laced with electricity and natural gas transmission lines, main fiber routes cross through it, and it sits in proximity to clean power plants. However, that infrastructure was not enough to sway me to get involved. The project needed lower cost power and tax options.

At my insistence, we created some unique tax incentives, but as a data center power guy for nearly two decades negotiating power deals and developing power plants, I saw the real potential was for clean, “controllable” power. I brought Apple to the site last spring and they too saw the same potential.

Fast forward just over a year, and Apple has one operational data center building, a second data center building fast approaching commissioning, and now an announcement of a nearby 18-megawatt solar project near the Reno Technology Park. Here are some links to public articles about these announcements:

Being under NDA with Apple, I cannot expand upon these articles with information from other sources. So let’s talk about what I mean by “controllable power”: the ability to take control of what I call the “Three C’s” of power: cost, capacity and control. Control is the deliverability, schedule and mix of that power, as well as control over the future cost of the electricity. Cost means current and future costs: when we plan to operate a data center, we must take into account the total electricity cost over the expected life, usually 10-20 years. Ideally, we don’t just want a low cost today, but more importantly a low average cost over that life cycle. I see too many folks run to a market with low-cost electricity today without realizing that those low costs will go up, often within 1-3 years, and to an average much higher than other location options. Predicting and seeing these future costs is one of the key advantages of using MegaWatt Consulting for your data center site selections, as I do not see any other company looking at all of the factors that will influence future data center costs the way we do. Do you want to choose a site that has great costs before you start constructing yet high costs by the time you fill it, and be surprised that your site is not a low-cost site a few years from now, or go to a site that will continue to provide low costs for years to come?
And capacity is key, as there is a cost to bringing power capacity to a project, and sometimes it is enormous. For example, a few years ago I was consulting for Equinix, and the cost the utility quoted to bring power capacity to a site was equal to nearly one-third of the construction cost of an entire new, large data center! That would have added nearly 50% to the total construction budget! I was able to negotiate that down to less than 10% of the total project budget, but it was still a very large expense, and one that is often not accounted for in site selection TCO estimates. All of this proves the point that controllability of power over time, in its cost, capacity, mix and deliverability, provides significant benefits to a company and its costs over time.
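The “low cost today versus low cost over the life cycle” argument can be made concrete with a simple net present value comparison. The sketch below is purely illustrative: the 7% discount rate, the $0.04/kWh and $0.055/kWh starting rates, and the escalation percentages are my own hypothetical numbers, not figures from any actual site:

```python
def npv(cashflows, rate):
    """Net present value of a series of annual costs (year 0 first)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

years = 10
rate = 0.07  # illustrative discount rate

# Site A: cheap today at $0.04/kWh, but rates escalate 12%/yr
site_a = [0.04 * 1.12 ** t for t in range(years)]
# Site B: $0.055/kWh under a long-term contract escalating only 1%/yr
site_b = [0.055 * 1.01 ** t for t in range(years)]

# Per-kWh NPV of each rate schedule; multiply by annual kWh for total cost.
# With these assumptions, the "expensive" contract site wins over 10 years.
print(f"Site A NPV per kWh: ${npv(site_a, rate):.3f}")
print(f"Site B NPV per kWh: ${npv(site_b, rate):.3f}")
```

The design point is simply that the comparison must be run over the expected operating life, not at year zero, which is exactly where the escalating-rate site loses.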

Whether or not Apple is responding to pressure from Greenpeace, NY Times articles, its stockholders, consumers or other stakeholders, having a data center site that provides flexibility across these many factors over time is key to adjusting to changing needs. Whether those needs concern cost, fuel mix, deliverability or reliability of that power, all provide significant benefits when they can be controlled to meet changing needs over time. All needs change over time, and since electricity cost drives a 10-year net present value analysis of data center ownership, “controllable power” is essential to good data center cost management.

If you’d like “to take control” of a key driver of your data center’s current and future costs, as well as combat changing pressures from stakeholders, markets and other factors, let’s talk about some options.

Coal-Burning Power Plants Must Finally Reduce Mercury Emissions

Thursday, March 1st, 2012

Coal-burning power plants account for the vast majority of the mercury we are exposed to. I’ve read statistics that 80-95% of the mercury we come into contact with comes from coal-burning power plants. In the US, it is estimated that coal-fired power plants are responsible for half of the nation’s mercury emissions.

The mercury in the emissions literally rains down on the oceans and land, falling on the crops that we eat, into the rivers and oceans that we fish, onto our backyards and into our lungs. Mercury exposure leads to many very serious mental and physical disorders.

“According to the U.S. Environmental Protection Agency, mercury is responsible for thousands of premature deaths and heart attacks. It can also damage children’s nervous systems and harm their ability to think and learn. The mercury, in essence, falls back to earth where it gets into the food chain.” (energy biz, “Obama Showers Coal with Mercury Rule”, Jan 3, 2012). I’ve read in EPA reports that an estimated 50,000 premature deaths occur every year in the US due to the emissions from coal-burning power plants. Imagine losing an entire city of 50,000 people every year; that is a population not much different from Palo Alto, CA. And that figure does not count the number of lung-related issues, such as asthma, that develop from these emissions.

Well, the Clean Air Act provides each of us the right to clean air. As such, in December 2011, “the EPA carried out its obligation under the 1990 Clean Air Act and demanded that coal-fired power plants implement the available technologies to reduce their emissions by 90 percent.”

These regulations are not a shock to most utilities, as they have been aware of the pending rules for some time (since the Clean Air Act was put into law), and most utilities actually support them: the rules allow utilities to shut down old coal-fired power plants, which are a financial, legal and environmental liability, in exchange for building new, cleaner-burning and more efficient power plants. These new regulations really only affect coal plants that were constructed 30 to 50 years ago. The operators can choose to bring them up to the new requirements, or shut them down and replace them with new, more efficient and less polluting plants, a decision compelled not just by the new regulations but also by the need to compete with lower-cost shale gas. Since most utilities in the US earn a return on building new infrastructure, it is good business to build new power plants. Essentially, the rule sets a more level playing field for the 1,400 coal-fired US power plants and ends 20 years of uncertainty about these regulations.

Will these new regulations cause electricity prices to increase? Yes, but likely not significantly, as the “EPA estimates that the cost of carrying out the new mercury rules will be about $9.6 billion annually. But it also says that payback will be as much as $90 billion by 2016 when all power plants are expected to be in compliance, or closed. The agency expects “small changes” in the average retail electricity rates, noting that the shift to abundant shale-gas will shield consumers.” I agree with that assessment, as shale gas will keep prices down. Even though “The American Coalition for Clean Coal Electricity says that the new mercury rule, in combination with other pending coal-related regulations, will increase electricity prices by $170 billion” through 2020, that is an estimate not much different from the EPA’s, and one likely to have a very minimal effect on electricity prices, since it is such a small percentage of total electricity spend per year.

The same group says that “Coal helps make electricity affordable for families and businesses,” per Steve Miller, chief executive of the coal group. “Unfortunately, this new rule is likely to be the most expensive rule ever imposed on coal-fueled power plants which are responsible for providing affordable electricity.” Of course, when one accounts for health-related costs, the new emissions rules are far less costly than paying for your son’s asthma medicine and your father’s lung cancer treatments. Finally, we are getting slightly cleaner air, something the Clean Air Act provided to us by law over 40 years ago.

Opening and Closing Remarks at the 2010 Data Center Efficiency Summit

Monday, October 25th, 2010

As I recently posted, the SVLG Data Center Efficiency Summit on Oct 14th, hosted at Brocade in San Jose, was an excellent event. Having co-chaired the demonstration program and helped put on the summit, I may be biased, but ask any of the 500+ attendees and I think you’ll hear fantastic reviews of the event, with all presentations given by end users on their own case studies or research projects. There were no commercial presentations. We had a VC panel, IT sessions, cooling sessions, air economization, a detailed discussion of the lack of contamination and corrosion from outside air, containment, containerized data centers, new metrics and standards for data centers (LEED, Title 24, revised PUE), and more.

Some folks have asked for me to post my opening and closing remarks, which I am doing here. These are my opening remarks to the full audience at Brocade:

“I have been so fortunate to co-chair this program for a second year. A program based upon a single idea that if we collaborate, we all help each other run and operate our data centers more efficiently. In this forum we educate each other from our own experiences on how we have reduced costs and energy use. Following my own mantra, saving energy is always saving money; this is especially true in a data center.

Reducing energy use has the most economic impact and net carbon reduction–a NegaWatt as Amory Lovins has termed it–for a kWh saved is more efficient and at lower cost than the cleanest kWh generated.

While we can locate data centers in Iceland, Northern Canada, or Reno, NV with a 100% renewable energy supply, the location of our data centers is usually driven by many requirements. While you’ll see case studies today of companies working to create zero-carbon data centers, companies have a fiduciary responsibility to control costs, and reducing energy use provides the greatest cost savings and environmental impact reductions for our growing user needs.

With an estimated 17 million servers in the US in 2009 (IDC), every month these servers support nearly 2 billion search users, over 500 million shoppers and approaching a billion social networking users; 100 million daily tweets; nearly all of our financial transactions; all of our blogging; and they are becoming the source for news, knowledge, travel, gaming, entertainment, and sharing and archiving our precious family photos.

The estimated 2.6 million data centers in the US are the hubs of our Internet, and Internet traffic grew a whopping 62% in 2010 after growing 74% in 2009. International Internet Bandwidth has grown over 10 times since 2002. The majority of the growth is from India, the Middle East and Eastern Europe with each growing over 100% per year. And we’re just beginning to accommodate cloud and mobile computing.

While our industry has shown staggering growth, we have become more efficient in what I call the efficiency renaissance of the last few years. We are providing these services at far greater efficiency than with the methods used only a decade ago. Amazon reports that they can now accommodate twice the servers on the same power draw as before. A recent EPA report shows the average US PUE is about 2, which is half the infrastructure usage of the average world PUE of about 3. Google has reported a data center fleet average PUE of 1.2, and I’ve been fortunate to work on several data center designs over the last year with annual average energy PUEs below 1.1.

While our best designs will reduce infrastructure energy to 5% of the server load, you’ll see excellent results today from existing data center case studies with simple changes in temperature and humidity settings and air containment. As our infrastructure energy shrinks to a tiny fraction of the energy of our hardware, our focus will shift to reducing the energy use of the IT equipment itself, as you’ll see today in case studies of software and other IT efficiency projects.
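For readers newer to the metric: PUE is simply total facility power divided by IT equipment power, so an infrastructure overhead of 5% of the server load corresponds to a PUE of 1.05. A quick sketch with round, illustrative numbers:

```python
def pue(total_facility_kw: float, it_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_kw

# Average US facility: overhead roughly equal to the IT load -> PUE ~2
print(pue(total_facility_kw=2000, it_kw=1000))  # 2.0

# Infrastructure trimmed to 5% of the server load -> PUE 1.05
print(pue(total_facility_kw=1050, it_kw=1000))  # 1.05
```

The same arithmetic shows why a PUE of 2 represents half the infrastructure usage of a PUE of 3: overhead of 1.0x the IT load versus 2.0x.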

EL Insights reports that our efficiency renaissance is expected to increase the US green data center market from about $4 billion today to nearly $14 billion in 2015. You’ll hear examples during our VC panel of what will lead future energy savings with new processors, networking and memory hardware, as well as new storage, software and lighting technologies.

We have 18 end-user presented efficiency projects and several detailed studies and discussions on various subjects to help you further improve your company’s data center efficiency. I hope you’ll pay close attention to this shared knowledge and experiences of our industry experts and participate in the question and answer periods following each session. If we continue to collaborate, innovate and further improve the energy efficiency of our data centers, we directly improve the efficiency of the Internet and the world’s economy. This will help us avoid energy challenges of the future.

We don’t need to look far back to see the value of our efficiency efforts. For it was in 2001 that the California energy crisis hit, and Exodus Communications, my employer and the creator of the leasable data center industry, was being accused of causing the crisis. Our energy efficiency achievements disproved these inaccurate accusations. I also worked with the SVLG to resurrect the Energy Committee; addressing our state’s energy challenges was the original founding purpose of the SVLG when David Packard of HP fame created it in the 1970s. My peers and I led voluntary energy curtailments to avoid countless more rolling blackouts, and Carl Guardino and I led a group of California elected leaders, along with the Governor and the CEOs of nearly every major Silicon Valley company, to address these energy issues. Just as we tackled those energy issues then, it is again energy efficiency leading the way, with Silicon Valley companies driving the innovation of this efficiency renaissance and the SVLG driving the policy. Through this summit, Carl and the SVLG are part of this continuing effort to advance the efficiency of our Internet industries. Please help me give Carl, CEO of the Silicon Valley Leadership Group, a warm welcome.”

Here are my closing remarks, which I am posting to clearly recognize those who gave their heart and time to help create this wonderful event. To them, thank you. To our attendees, thank you. Together, we all gain by working to reduce energy use and costs.

This event and efficiency program would not exist if it were not for the vision and leadership of Dale Sartor and Bill Tschudi of Lawrence Berkeley National Laboratory since the inaugural 2008 Summit. Under their direction, LBNL has led the development of this program with funding from the California Energy Commission and its PIER program, which funded some of the project management and some of these technology demonstrations.

As my favorite business school professor always said, “the most prepared team always wins.” The Silicon Valley Leadership Group, particularly Bob Hines and staff Anne Smart, Samantha James, Colin Buckner and others, ensured that the preparations were in place for our event today. Thank you.

Thank you Brocade, our host for the 2010 Data Center Efficiency Summit, especially John Noh, Rebekah Johnson and Brocade’s Events planning team.

Thank you to my co-chairs, I could not have pulled this event off without any of you: Tim Crawford, Ray Pfeifer, Kelly Aaron and Brian Brogan. And also our volunteer Summit Planning Team of Subodh Bapat, Henry Wong, Deborah Grove, Mukesh Khattar, Patricia Nealon, Ralph Renne, Joyce Dickerson, Zen Kishimoto and Chris Noland.

And most importantly, kudos to all of our speakers, moderators and panel members for sharing your knowledge, vision and ideas; we have all gained from your selfless tutelage. Let me know what you’d like to see at a future summit. Now go save some energy, and let’s all have a drink together to celebrate today’s success!