Archive for the ‘Energy’ Category

Integrating the UPS, the PDU, the rack and more

Friday, April 5th, 2019

I’ve been thinking about this for over a decade and will finally share my thoughts. Why do we have in our data centers a frame, aka rack, that we literally bolt many hardware devices to, then run power cables from each of them to separate power devices and network cables to other separate devices, each of which, like the hardware, has its own network and power connections? Each of these additions takes up available rack space while adding network connections, power cords and network cables, all consuming space and requiring routing and management. Obviously there is not just a cost to the cables, but an environmental impact to making them and a cost to managing them. We spend money on cable management to make all of these cables look pretty: a telltale sign that the IT team has it together, much like a clean desk, whether that signal is accurate or not.

However, what if the rack, the PDU, the UPS, the networking patch panel, and all of these cables were integrated into one device, so that we didn’t spend hours cabling, wiring and attaching them, or spend money on components that inhibit airflow, add weight and cost, and add enough entanglement and complexity that it becomes hard to quickly discern where a problem lies? What if each piece of hardware had its own current transducer on the incoming power to measure and report power usage, and that same incoming feed had a resettable breaker so the power could be remotely turned on or off, much as a rack-mounted PDU provides today? Why is this in a separate device instead of integrated within the hardware device that already has a network connection and the ability to collect and report this information?
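To make that concrete, here is a minimal sketch in Python of what such integration could expose over the rack’s single network connection. Every name here (IntegratedCircuit, IntegratedRack, and their methods) is a hypothetical illustration of the concept, not any real product’s API:

```python
# Sketch of the integrated idea: every device carries its own current
# transducer and resettable breaker, and the rack aggregates telemetry
# and control over one network uplink; no separate PDU, meter, or
# controller. All names are hypothetical, not a real product API.
from dataclasses import dataclass


@dataclass
class IntegratedCircuit:
    """One power circuit built into a device, replacing a switched PDU outlet."""
    device_id: str
    volts: float = 208.0         # feed voltage (assumed)
    amps: float = 0.0            # latest current-transducer reading
    breaker_closed: bool = True

    def power_watts(self) -> float:
        """Report instantaneous power measured at the device itself."""
        return self.amps * self.volts if self.breaker_closed else 0.0

    def trip(self) -> None:
        """Remotely open the resettable breaker."""
        self.breaker_closed = False

    def reset(self) -> None:
        """Remotely close the breaker to restore power."""
        self.breaker_closed = True


class IntegratedRack:
    """The rack itself reports and controls power for every circuit it holds."""

    def __init__(self, circuits: list[IntegratedCircuit]) -> None:
        self.circuits = {c.device_id: c for c in circuits}

    def total_power_watts(self) -> float:
        return sum(c.power_watts() for c in self.circuits.values())

    def power_cycle(self, device_id: str) -> None:
        circuit = self.circuits[device_id]
        circuit.trip()
        circuit.reset()
```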

Why don’t we have pieces of hardware that provide all of the power transformation, rectification, energy storage, measurement, reporting, and remote control by circuit for the entire rack, within the rack? Why do we need at least four different devices doing this, all far from the actual load: a centralized UPS, a floor-mounted PDU or other transformer and circuit panel, a rack-mount PDU, and a power supply for voltage and AC/DC conversion? Doesn’t it seem silly to have so many devices, cables, wires and corresponding components when all of this could easily live within one device, with one network connection instead of all of these separate and discrete network connections, translators, monitors, and other tools?

Google integrated energy storage on the hardware device back in 2002 or even earlier. It was a paradigm shift in approach, and I feel like we are on the edge of this again: energy storage is finally available in a way that lets us integrate it into the rack or even into each hardware device, so we should also integrate power measurement and monitoring, power control, and power transformation and rectification, along with energy storage, into one device or every device. That device could be the rack, or it could be a separate hardware device much like our rack-mount power supplies. Overall, the approach I envision yields a large reduction in power cables and cable management, a reduction in network ports, and a very large reduction in the number of components and devices within the power chain of the data center. It reduces total cabling to four cables: two network and two power cables per rack, and no others at all. I see little to no value in all of these discrete devices remaining separate, and only benefits to integrating them, in cost, management, environmental impact, and ease of use and design of our data centers. Imagine a data center that is not hamstrung by its UPS capacity or other electrical components, but instead limited only by its onsite energy generation and utility capacity. Yes, cooling capacity will be the next topic to solve, but quite frankly, I’ve been helping design solutions that have dramatically reduced cooling losses and overall cooling capacity for over a decade, and I see far more options for future cooling capacity increases than we have today for electrical capacity increases.
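The cable math behind that claim is simple. Here is a back-of-the-envelope sketch for a dual-corded, dual-homed rack of 40 devices; all per-device counts are illustrative assumptions:

```python
# Back-of-the-envelope cable count for one rack of 40 devices: today's
# discrete build versus the integrated rack described above. Per-device
# counts are assumptions (dual-corded power, two network links each).
devices = 40
power_cords_per_device = 2     # A and B feeds to the rack PDUs
network_cables_per_device = 2  # to a top-of-rack switch or patch panel
pdu_whips = 2                  # rack PDU feeds back to the floor PDU
pdu_mgmt_ports = 2             # management network drops for the two PDUs

discrete = devices * (power_cords_per_device + network_cables_per_device) \
           + pdu_whips + pdu_mgmt_ports
integrated = 2 + 2             # two power plus two network cables, total

print(f"discrete build:  {discrete} cables")   # 164
print(f"integrated rack: {integrated} cables") # 4
```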

Yet if we provide racks that have exactly the power capacity the hardware needs, with the energy storage the devices in that rack require, and then remove all of the other “clutter” by better integrating these disparate components, we get data centers that take another leap: they cost less to build and operate, are more energy efficient, and are essentially future-proofed to the needs of the hardware and its future business uptime needs, while scaling the electrical infrastructure directly to match concurrent demand. Why have we not yet done this?

I may be giving away the secret sauce of a great idea, or of something already in the works by others. I’ve been talking about this idea and others related to it for years with close industry friends, and I am still surprised that it has not yet been done. Yet the methods to build a better data center electrical system are right in front of us: common components, integrated together thoughtfully at a much lower TCO. So explain to me why we are not thinking outside of the same component device boxes and advancing our data center electrical systems?

My story in Reno and receiving the Technologist of the Year award

Monday, April 3rd, 2017

A few nights ago I was honored to receive an award from NCET as Technologist of the Year. This journey started nearly 15 years ago, so I thought I would share more about it.

In 2002 I finished the build-out of a colocation data center in Reno, Nevada. I never thought I would come to Reno, yet an opportunity to lead a colocation data center company focused on mid-sized but underserved cities was appealing for many reasons. Early on with this data center in Reno I experimented with and used air economization and hot/cold-aisle containment, each then a little-known way to improve data center energy efficiency, perhaps the first use of these techniques, and they did significantly reduce energy use.

Starting in 2004 and for over a decade, I worked mostly remotely from Reno for Google (when we started buying, designing, building and operating internal data centers), Equinix (the largest data center provider), DuPont Fabros (at the time the second largest wholesale data center provider) and Yahoo!, where I managed global data center strategy and development at a time when we were building out large internal data centers and expansions around the globe. I also ran global data centers for BEA Systems before it was acquired by Oracle, and completed long-term marketing and product development consulting for Digital Realty, the largest wholesale data center provider, and many others, including Facebook and other Big 7 Internet companies. I call Apple, Google, Microsoft, Amazon, Facebook, Yahoo! and eBay the Big 7, as they build, own and operate the majority of data centers, outspending all of the colocation providers combined on data center capital every year by a factor of almost 10. I have been lucky enough to work with five of these seven big data center companies.

In the midst of this, I worked together with others to create and build the Reno Technology Park (RTP), the largest dedicated data center campus known at the time, located just outside of Reno in Washoe County. I worked with many companies to persuade them to locate a future data center in Reno, and secured Apple as the first tenant of the RTP.

I maintain a residence in the Reno area for its very close proximity to Lake Tahoe and the fabulous skiing, mountain biking, cycling and other activities that I love; I have spent much time playing in the area over the years. With a home here, I avoid the congestion and high cost of living of the SF Bay Area, as well as state income tax. Many technology workers live and work in the Reno area, and many, like me, live in the Reno-Tahoe area yet commute to the Bay Area or elsewhere for work as needed, including executives of technology companies.

Because of the many great companies and people working in the Reno area, I am even more humbled to receive this award. Thank you NCET and the board for this recognition, and Abbi Whitaker for her nomination. Having developed data centers in over 20 countries and performed data center site selections in almost 30 countries as well as throughout the United States, I saw that Reno, Nevada was a good place to locate data centers, and that they would be great for the local economy. I wanted to bring my industry to my home, and to see the local economy continue to grow and evolve.

I commend the team at EDAWN, Governor Sandoval and his staff, including Steve Hill, for helping to make these wins happen. I look forward to continuing to work with our community, all of you, NCET and EDAWN to see Reno’s economy grow and develop.

Apple+Reno+Solar = “Controllable Power”

Monday, July 8th, 2013

Some of you know that I developed the Reno Technology Park along with a few others. I am the sole data center expert in the group, and when I first viewed the property, I saw its potential as a site for data centers: the property is laced with electricity and natural gas transmission lines, main fiber routes cross through it, and it sits near clean power plants. However, that infrastructure alone was not enough to sway me to get involved. The project needed lower cost power and tax options.

At my insistence, we created some unique tax incentives, but as a data center power guy who has spent nearly two decades negotiating power deals and developing power plants, I saw that the real potential was for clean, “controllable” power. I brought Apple to the site last spring, and they too saw that potential.

Fast forward just over a year, and Apple has one operational data center building, a second data center building fast approaching commissioning, and now an announcement of an 18-megawatt solar project near the Reno Technology Park. Here are some links to public articles about these announcements:
http://www.macrumors.com/2013/03/27/first-phase-of-apples-new-reno-nevada-data-center-ready-to-open/
http://www.datacenterknowledge.com/archives/2013/03/27/apple-ready-to-roll-in-reno-with-a-coop/
http://www.rgj.com/videonetwork/2264915824001?odyssey=mod%7Ctvideo2%7Carticle
http://www.datacenterknowledge.com/archives/2013/07/02/apple-planning-solar-farm-next-to-planned-reno-nevada-data-center/
http://www.computerworld.com/s/article/9240559/Apple_unveils_18_megawatt_solar_farm_to_power_cloud_data_center?source=CTWNLE_nlt_pm_2013-07-03

Being under NDA with Apple, I cannot expand upon these articles with information from other sources. So let’s talk about what I mean by “controllable power”: the ability to take control of what I call the “Three C’s”, cost, capacity and control. Control means the deliverability, schedule and mix of that power, as well as controlling its future cost. Cost means both current and future costs: when we plan to operate a data center, we must account for the total electricity cost over the expected life, usually 10-20 years. Ideally, we don’t just want a low cost today, but more importantly a low average cost over that life cycle. I see too many folks run to a market with low-cost electricity today and not realize that those low costs will go up, often within 1-3 years, and to an average much higher than other location options. Predicting these future costs is one of the key advantages of using MegaWatt Consulting for your data center site selections, as I do not see any other company looking at all of the factors that will influence future data center costs the way we do. Do you want to choose a site that has great costs before you start constructing yet high costs by the time you fill it, and be surprised that your site is not a low-cost site a few years from now, or go to a site that will continue to provide low costs for years to come?
And capacity is key, as there is a cost to bringing power capacity to a project, and sometimes it is enormous. For example, a few years ago I was consulting for Equinix, and the cost the utility quoted to bring power capacity to a site was equal to nearly one-third of the construction cost for an entire new and large data center! That would have added nearly 50% to the total construction budget! I was able to negotiate that down to less than 10% of the total project budget, but it remained a very large expense, and one that is often not accounted for in site selection TCO estimates. All of which proves the point that controllability of power over time, in its cost, capacity, mix and deliverability, provides significant benefits to a company and its costs over time.

Whether or not Apple is responding to pressure from Greenpeace, the NY Times’ articles, its stockholders, consumers or other stakeholders, having a data center site that can provide flexibility across these many factors over time is key to adjusting to changing needs. Whether those needs are costs, the fuel mix, or the deliverability or reliability of that power, all provide significant benefits when they can be controlled to meet changing needs over time. And all needs change over time; given that electricity cost drives a 10-year net present value analysis of data center ownership, “controllable power” is essential to good data center cost management.
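To illustrate why the life-cycle view matters more than the day-one price, here is a small sketch comparing the 10-year NPV of the electricity bill for a 10 MW load at two hypothetical sites. The rates, escalation and discount figures are assumptions for illustration only:

```python
# Two hypothetical sites: A looks cheap today but escalates quickly
# (the common surprise); B costs more on day one but stays nearly flat.
load_mw = 10.0
mwh_per_year = load_mw * 8760   # 87,600 MWh at full load
discount_rate = 0.08            # assumed cost of capital

def npv_of_power(start_price_per_mwh: float, annual_escalation: float,
                 years: int = 10) -> float:
    """Discounted total electricity spend over the site's life."""
    total = 0.0
    for year in range(years):
        price = start_price_per_mwh * (1 + annual_escalation) ** year
        total += price * mwh_per_year / (1 + discount_rate) ** year
    return total

site_a = npv_of_power(40.0, 0.10)  # $40/MWh today, +10%/yr
site_b = npv_of_power(55.0, 0.01)  # $55/MWh today, +1%/yr

print(f"Site A 10-yr NPV: ${site_a / 1e6:,.1f}M")  # ~ $38.1M
print(f"Site B 10-yr NPV: ${site_b / 1e6:,.1f}M")  # ~ $36.3M
```

Under these assumptions, the site that was 27% cheaper on day one ends up costing more over the decade, which is exactly the surprise described above.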

If you’d like “to take control” of a key driver of your data center’s current and future costs, as well as combat changing pressures from shareholders, markets and other factors, let’s talk about some options.

Coal-Burning Power Plants Must Finally Reduce Mercury Emissions

Thursday, March 1st, 2012

Coal-burning power plants account for the vast majority of the mercury we are exposed to. I’ve read statistics that 80-95% of the mercury we come into contact with comes from coal-burning power plants, and in the US, coal-fired power plants are estimated to be responsible for half of the nation’s mercury emissions.

The mercury in these emissions literally rains down on the oceans and land, falling on the crops we eat, into the rivers and oceans we fish, onto our backyards and into our lungs. Mercury exposure leads to many very serious mental and physical disorders.

“According to the U.S Environmental Protection Agency, mercury is responsible for thousands of premature deaths and heart attacks. It can also damage children’s nervous systems and harm their ability to think and learn. The mercury, in essence, falls back to earth where it gets into the food chain.” (EnergyBiz, “Obama Showers Coal with Mercury Rule”, Jan 3, 2012: http://www.energybiz.com/article/12/01/obama-showers-coal-mercury-rule). I’ve read in EPA reports that an estimated 50,000 premature deaths occur every year in the US due to the emissions from coal-burning power plants. Imagine losing an entire city of 50,000 people every year; that is a population not much different from Palo Alto, CA. And that figure does not count the lung-related issues, such as asthma, that develop from these emissions.

Well, the Clean Air Act provides each of us the right to clean air. As such, in December 2011, “the EPA carried out its obligation under the 1990 Clean Air Act and demanded that coal-fired power plants implement the available technologies to reduce their emissions by 90 percent.”

These regulations are not a shock to most utilities, as they have been aware of the pending rules for some time (since the Clean Air Act was put into law), and most utilities actually support the rule because it allows them to shut down old coal-fired power plants, which are a financial, legal and environmental liability, in exchange for building new, cleaner-burning and more efficient power plants. These new regulations really only affect coal plants constructed 30 to 50 years ago. The operators can choose to bring those plants up to the new requirements, or shut them down and replace them with new, more efficient and less polluting plants, a decision compelled not just by the new regulations but also by the need to compete with lower cost shale gas. Since most utilities in the US earn a return on building new infrastructure, it is good business to build new power plants. Essentially, the rule sets a more level playing field for the 1,400 coal-fired US power plants and ends 20 years of uncertainty about these regulations.

Will these new regulations cause electricity prices to increase? Yes, but likely not significantly, as the “EPA estimates that the cost of carrying out the new mercury rules will be about $9.6 billion annually. But it also says that payback will be as much as $90 billion by 2016 when all power plants are expected to be in compliance, or closed. The agency expects “small changes” in the average retail electricity rates, noting that the shift to abundant shale-gas will shield consumers.” I agree with that assessment, as shale gas will keep prices down. Even though “The American Coalition for Clean Coal Electricity says that the new mercury rule, in combination with other pending coal-related regulations, will increase electricity prices by $170 billion” through 2020, that estimate is not much different from the EPA’s, and it is also likely to have a very minimal effect on electricity prices, since it is such a small percentage of total electricity spend per year.
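As a rough sanity check on that claim, here is the arithmetic of spreading the coal group’s $170 billion figure over 2012-2020 against total US retail electricity spend; the ~$370 billion per year figure is my assumed approximation of national retail electricity revenue around 2012:

```python
# Rough check: annualize the industry's cost estimate and compare it to
# total US retail electricity spend. The spend figure is an assumption.
rule_cost_total = 170e9           # coal group's estimate through 2020
years = 9                         # 2012 through 2020
us_retail_spend_per_year = 370e9  # assumed approx. US retail revenue, ~2012

annual_cost = rule_cost_total / years            # ~ $18.9B per year
share = annual_cost / us_retail_spend_per_year   # ~ 5%
print(f"~${annual_cost / 1e9:.0f}B/yr, about {share:.0%} "
      f"of annual US electricity spend")
```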

“Coal helps make electricity affordable for families and businesses,” says Steve Miller, chief executive of the same coal group. “Unfortunately, this new rule is likely to be the most expensive rule ever imposed on coal-fueled power plants which are responsible for providing affordable electricity.” Of course, when one accounts for health-related costs, the new emissions rules are far less costly than paying for your son’s asthma medicine and your father’s lung cancer treatments. Finally, we are getting slightly cleaner air, something the Clean Air Act provided to us by law over 40 years ago.

Call for Case Studies and Data Center Efficiency Projects

Wednesday, February 15th, 2012

As many of you know, I have chaired what has become known as the SVLG Data Center Efficiency Summit since the end of its first year’s program, in the fall of 2008, a wonderful summit held at Sun Microsystems’ Santa Clara campus. This has been a customer-focused, volunteer-driven project with case studies presented by end users about their efficiency achievements. The goal is for all case studies to share the actual savings achieved, to show what works and the best ways to improve efficiency, and to provide ideas and support for all kinds of efficiency improvements within our data centers. We’ve highlighted software, hardware and infrastructure improvements, as well as new technologies and processes, in the belief that we all gain when we share. Through collaboration we all improve. And as an industry, if we all improve, we avoid over-regulation, we help preserve our precious energy supplies and keep their costs from escalating as quickly, we reduce the emissions our industry generates, and we drive innovation. In essence, we all gain when we share ideas with each other.

As such, I consider this program immensely valuable as an industry tool for efficiency and improvement for all. Consequently, I have volunteered hundreds of hours of my time and forgone personal financial gain to chair and help advance this program, along with many other volunteers who have also given much of their time to advance this successful and valuable program. I do not have the resources to give my volunteer time indefinitely, much as I wish I did, but I do hope to provide more support or time with future corporate sponsorship.

I do hope that you can participate in this valuable program and the corresponding event held in the late fall every year since 2008. Below is more information from the SVLG. You can also call me for more info.

Attention data center operators, IT managers, energy managers, engineers and vendors of green data center technologies: A call for case studies and demonstration projects is now open for the fifth annual Data Center Efficiency Summit to be held in November 2012.

The Data Center Efficiency Summit is a signature event of the Silicon Valley Leadership Group, in partnership with the California Energy Commission and the Lawrence Berkeley National Laboratory, which brings together engineers and thought leaders for one full day to discuss best practices, cutting-edge new technologies, and lessons learned by real end users, not marketing pitches.

We welcome case studies presented by an end user or customer. If you are the vendor of an exciting new technology, please work with your customers to submit a case study. Case studies of built projects with actual performance data are preferred.

Topics to consider:
Energy Efficiency and/or Demand Response
Efficient Cooling (Example: Liquid Immersion Cooling)
Efficient Power Distribution (Example: DC Power)
IT Impact on Energy Efficiency (Example: Energy Impact of Data Security)
Energy Efficient Data Center Operations
In the final version of your case study, you will need to include the following (a short worked example of this arithmetic appears after the list):
Quantifiable savings in terms of kWh savings, percentage reduction in energy consumption, annual dollar savings for the data center, or CO2 reduction
Costs and ROI, including all implementation costs with a breakdown (hardware, software, services, etc.) and the time horizon for savings
Description of site environment (age, size or load, production or R&D use)
List of any technology vendors or NGO partners associated with project
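Here is a minimal worked example of the savings arithmetic a case study might report. Every input below is a hypothetical placeholder, not data from any actual project:

```python
# Placeholder inputs for an illustrative efficiency project.
baseline_kwh = 4_000_000       # annual energy use before the project
project_kwh = 3_200_000        # annual energy use after the project
rate_per_kwh = 0.10            # assumed blended electricity rate, $/kWh
co2_tons_per_kwh = 0.0005      # assumed ~0.5 kg CO2 per kWh grid average
implementation_cost = 200_000  # hardware + software + services (assumed)

kwh_saved = baseline_kwh - project_kwh                 # 800,000 kWh/yr
pct_reduction = kwh_saved / baseline_kwh               # 20%
dollar_savings = kwh_saved * rate_per_kwh              # $80,000/yr
co2_reduction = kwh_saved * co2_tons_per_kwh           # ~400 t CO2/yr
payback_years = implementation_cost / dollar_savings   # 2.5 years

print(f"{kwh_saved:,} kWh/yr saved ({pct_reduction:.0%}); "
      f"${dollar_savings:,.0f}/yr; {co2_reduction:,.0f} t CO2/yr; "
      f"payback {payback_years:.1f} years")
```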
Please submit a short (1 page or less) statement of interest and description of your project or concept by March 2, 2012 to asmart@svlg.org with subject heading: DCES12. Final case studies will need to be submitted in August 2012. Submissions will be reviewed and considered in the context of this event.
Interested in setting up a demonstration project at your facility? We may be able to provide technical support and independent evaluation. Please call Anne at 408-501-7871 for information.