I’ve been thinking about this for over a decade, and I’m finally going to share those thoughts. Why, in our data centers, do we take a frame (a rack), literally bolt many hardware devices to it, run power cables from each of them to separate power devices, and run network cables to still other devices, each of which, like the hardware itself, has its own network and power connections? Every one of these additions consumes rack space, adds network connections, and adds power and network cables that all take up room and require routing and management. Beyond the cost of the cables themselves, there is an environmental impact to manufacturing them and an ongoing cost to managing them. We even spend money on cable management to make all of these cables look pretty, a telltale sign that the IT team has it together, much like a clean desk, whether that signal is accurate or not.
However, what if the rack, the PDU, the UPS, the network patch panel, and all of these cables were integrated into one device? We would no longer spend hours cabling, wiring, and attaching these components, or money buying them, while they inhibit airflow, add weight and cost, and add enough entanglement and complexity that it becomes hard to quickly discern where a problem lies. What if each piece of hardware had its own current transducer on the incoming power to measure and report power usage, and that same incoming feed had a resettable breaker so power could be remotely switched on or off, just like a rack-mounted PDU? Why is that capability in a separate device instead of integrated into the hardware that already has a network connection and the ability to collect and report this information?
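To make the idea concrete, here is a minimal sketch of the monitoring and control an integrated device could expose over the network connection it already has. Every class, field, and value below is hypothetical and illustrative, not a real product or protocol; the sketch also assumes unity power factor to keep the math simple.

```python
"""Illustrative sketch: per-device power metering and remote breaker
control integrated into the hardware itself. All names and values here
are hypothetical assumptions, not an existing API."""

from dataclasses import dataclass


@dataclass
class PowerCircuit:
    """One incoming power feed with an inline current transducer and a
    remotely resettable breaker, built into the device."""
    circuit_id: str
    voltage_v: float
    current_a: float = 0.0
    breaker_closed: bool = True

    def read_power_w(self) -> float:
        # Real power from the transducer reading; assumes unity power
        # factor for simplicity in this sketch.
        return self.voltage_v * self.current_a if self.breaker_closed else 0.0

    def trip(self) -> None:
        # Remote power-off, replacing the switched outlet on a rack PDU.
        self.breaker_closed = False

    def reset(self) -> None:
        # Remote power-on via the resettable breaker.
        self.breaker_closed = True


# The device reports its own draw over the network link it already has.
feed = PowerCircuit(circuit_id="server-42-feed-a", voltage_v=240.0, current_a=1.5)
print(feed.read_power_w())  # 360.0 W while the breaker is closed
feed.trip()
print(feed.read_power_w())  # 0.0 after remote shutoff
```

The point of the sketch is that nothing here requires a separate box: the measurement and the switching ride on compute and connectivity the server already has.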
Why don’t we have hardware that provides all of the power transformation, rectification, energy storage, measurement, reporting, and per-circuit remote control for the entire rack, within the rack? Why do we need at least four different devices doing this, all far from the actual load: a centralized UPS, a floor-mounted PDU or other transformer and circuit panel, a rack-mount PDU, and a power supply for voltage and AC/DC conversion? Doesn’t it seem silly to have so many devices, cables, wires, and corresponding components when all of this could live in one device, with one network connection instead of all of these separate, discrete network connections, translators, monitors, and other tools?
Google integrated energy storage onto the hardware device back in 2002, or even earlier. It was a paradigm shift, and I feel we are on the edge of another one: energy storage is finally available in a form we can integrate into the rack, or even into each hardware device. So we should also integrate power measurement and monitoring, power control, power transformation and rectification, and energy storage into one device, or into every device. That device could be the rack itself, or a separate piece of hardware much like our rack-mount power supplies. Either way, the result is a large reduction in power cables and cable management, a reduction in network ports, and a very large reduction in the number of components and devices in the data center’s power chain. The approach I envision reduces total cabling to four cables per rack: two network and two power, no others…at all. I see little to no value in keeping these discrete devices separate, and only benefits to integrating them, in cost, management, environmental impact, and the ease of using and designing our data centers. Imagine a data center that is not hamstrung by its UPS capacity or other electrical components, but limited only by its onsite energy generation and utility capacity. Yes, cooling capacity will be the next problem to solve, but frankly, I’ve been helping design solutions that dramatically reduce cooling losses and overall cooling capacity for over a decade, and I see many more paths to future cooling-capacity increases than we have today for electrical-capacity increases.
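A rough back-of-the-envelope comparison shows the scale of that cable reduction. The per-device counts below are assumptions chosen for illustration (a dual-corded, dual-homed 40-server rack), not measured data from any particular facility:

```python
# Illustrative cable count for one rack. All per-device numbers are
# assumptions for the sake of the comparison, not measured data.
devices = 40            # assumed servers per rack
power_cords_each = 2    # assumed dual power feeds per device
network_links_each = 2  # assumed redundant network links per device

# Conventional build: every device cabled individually, plus a rough
# allowance of two upstream whips from the rack PDUs to the panel.
conventional = devices * (power_cords_each + network_links_each) + 2

# Integrated rack: two power and two network cables total, as described.
integrated = 2 + 2

print(conventional, integrated)  # 162 4
```

Even under these modest assumptions, one rack drops from on the order of 160 cables to 4, and everything removed was a cable that had to be bought, routed, labeled, and managed.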
Yet if we provide racks with exactly the power capacity the hardware needs and the energy storage its devices require, and remove all of the other “clutter” by integrating these disparate components, we get data centers that take another leap: they cost less to build and operate, are more energy efficient, and are essentially future-proofed to the hardware’s needs and its future business uptime requirements, while scaling the electrical infrastructure directly to match concurrent demand. Why have we not yet done this?
I may be giving away the secret sauce of a great idea, or of something already in the works elsewhere. I’ve been talking about this idea and others related to it for years with close industry friends, and I am still surprised it has not yet been done. And yet the methods to build a better data center electrical system are right in front of us: common components, thoughtfully integrated, at a much lower TCO. So explain to me why we are not thinking outside of the same component-device boxes and advancing our data center electrical systems.