Thinking Outside The Case

Nice Rack!

Note: The below is a straight off-the-top-of-my-head rant I dashed off to my editor at a technology journal I occasionally write for. I'm looking for feedback to tighten it up. Feel free to tear it apart!

When it comes to data center metrics, the one most often talked about is square footage. Nobody ever announces that they’ve built a facility with Y tons of cooling or Z megawatts; the first metric quoted is X square feet. Talk to any data center manager, however, and they’ll tell you that floor space is completely irrelevant these days. It only matters to the real estate people. All that matters to the rest of us is power and cooling – watts per square foot. How much space you have available is nowhere near as important as what you can actually do with it.
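To put numbers on that (the facilities below are hypothetical, invented purely for illustration), a quick back-of-the-envelope calculation shows why square footage alone tells you almost nothing:

```python
# Back-of-the-envelope power density comparison.
# Both facilities are hypothetical; the figures are for illustration only.

facilities = {
    "Facility A": {"sq_ft": 100_000, "usable_kw": 5_000},  # big floor, thin power feed
    "Facility B": {"sq_ft": 40_000,  "usable_kw": 6_000},  # smaller floor, fatter feed
}

for name, f in facilities.items():
    watts_per_sq_ft = f["usable_kw"] * 1_000 / f["sq_ft"]
    print(f"{name}: {f['sq_ft']:,} sq ft, {f['usable_kw']:,} kW "
          f"-> {watts_per_sq_ft:.0f} W/sq ft")

# Facility A: 100,000 sq ft, 5,000 kW -> 50 W/sq ft
# Facility B: 40,000 sq ft, 6,000 kW -> 150 W/sq ft
# The "smaller" facility is the one that can actually feed and cool dense racks.
```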

If you look at your data center with a fresh eye, where is the waste really happening?

Since liquid-cooled servers sit at the far right-hand side of the bell curve, achieving electrical density for the majority of us is usually a matter of moving air effectively. So what is REALLY preventing the air from moving in your data center? I won’t rehash the raised floor vs. solid floor debate (since we all know that solid floors are better), but even I know that the perforated tiles or the overhead ductwork are not the REAL constraint. A lot of folks have poured a lot of energy into containment: hot aisle containment systems, cold aisle containment systems, and even in-row supplemental cooling systems.
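For a sense of scale, here is a rough sketch using the common sea-level approximation CFM ≈ 3.16 × watts / ΔT(°F). The rack wattages are made-up examples, but they show how much air has to move, and why anything standing in its way hurts:

```python
# Rough airflow needed to carry heat out of a rack, using the common
# sea-level approximation: CFM ~= 3.16 * watts / delta_T_F.
# The rack wattages below are hypothetical examples.

def required_cfm(watts: float, delta_t_f: float = 20.0) -> float:
    """Airflow (CFM) needed to remove `watts` of heat at a
    `delta_t_f` degree Fahrenheit rise across the equipment."""
    return 3.16 * watts / delta_t_f

for rack_watts in (4_000, 8_000, 16_000):
    print(f"{rack_watts:>6,} W rack -> ~{required_cfm(rack_watts):,.0f} CFM")

#  4,000 W rack -> ~632 CFM
#  8,000 W rack -> ~1,264 CFM
# 16,000 W rack -> ~2,528 CFM
# Every grille, bezel, and cramped chassis that air has to squeeze through
# raises the fan power needed to hit those numbers.
```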

In reality, however, all of these solutions address the environment around the servers, not the servers themselves, which are, after all, the source of all the heat. Why attack symptoms? Let’s go after the problem directly: the server.

First of all, the whole concept of a “rack unit” needs to be discarded. I’ve ranted before about the absurdity of 1U servers, and how they actually decrease data center density when deployed as they are currently built. I’d like to take this a step further and get rid of the whole idea of a server case. Wrapping a computer in a steel and plastic box, a constrained space that bottlenecks efficient airflow, is patently absurd. It was a good idea in the day of 66 MHz CPUs and hard drives that were bigger than your head, but in today’s reality of multi-core power hogs burning like magnesium flares, it is just asking for trouble. Trouble is what we’ve got right now: trouble in the form of hot little boxes, be they 1U or blade servers, packing too much heat into too constrained a space.

Virtualization won’t solve this problem. If anything, it will make it worse by increasing the utilization of the individual CPUs, making them run hotter more of the time. Virtualization might lower the power bill for the workloads inside the server, but it won’t really change anything for the facility that surrounds the servers in question. The watts per square foot impact won’t be as big as we hoped, and we’ll still be faced with cooling a hot box within a constrained space.
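Here is a toy consolidation model to make that concrete. The idle and peak wattages and the utilization figures are assumptions, not measurements, but they show why the boxes that survive virtualization run hotter, not cooler:

```python
# Toy model of consolidation via virtualization, using a simple linear
# power model (idle + proportional-to-utilization). All numbers are
# made up for illustration.

IDLE_W, MAX_W = 150, 300  # assumed power envelope of a 1U server

def server_watts(utilization: float) -> float:
    return IDLE_W + (MAX_W - IDLE_W) * utilization

# Before: 12 lightly loaded 1U servers.
before_each = server_watts(0.10)
before_total = 12 * before_each

# After: the same work packed onto 3 hosts running much hotter.
after_each = server_watts(0.70)
after_total = 3 * after_each

print(f"Before: 12 boxes x {before_each:.0f} W = {before_total:.0f} W total")
print(f"After:   3 boxes x {after_each:.0f} W = {after_total:.0f} W total")

# Before: 12 boxes x 165 W = 1980 W total
# After:   3 boxes x 255 W = 765 W total
# The total power bill drops, but every surviving chassis now dissipates
# roughly 55% more heat inside the same constrained 1U of space.
```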

So here is my challenge to the server manufacturers: Think outside of the case.

This isn’t a new idea, really, nor is it mine. We’ve all seen how Google has abandoned cases for its servers. Conventional wisdom says that only a monolithic deployment such as a Google data center can really make use of this innovation. Baloney. How often does anyone deploy a single server anymore? Hardly ever.

If server manufacturers would think outside of the case, they could design and sell servers in enclosures at the 10 or 20 rack unit scale. They could even sell entire racks. By shedding cases altogether, both server cases and blade chassis, they could create dense, electrically simple, easy-to-maintain, and most importantly easy-to-cool servers. The front could carry the I/O ports, fans, and drives – big fans, for quiet efficiency. The backs could be left open, with power down one side and network connections down the other. Minimize the case itself to as little as possible… think of Colin Chapman‘s famous directive about building a better race car: “Simplify, then add lightness.” The case of a server should serve one purpose only: to anchor it to the rack. Everything else is a superfluous obstruction of airflow. There’s no need for steel; plenty of lighter-weight materials can do the job with less mass.
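As a sketch of why big, slow fans win here, consider the idealized fan affinity laws for geometrically similar fans. The diameters and speed below are assumptions, and real fans won’t scale this cleanly, but the direction of the result holds:

```python
# Idealized fan affinity laws (geometrically similar fans, same air density):
#   airflow      Q ~ N * D^3
#   shaft power  W ~ N^3 * D^5
# where N is fan speed and D is diameter. This is only a scaling argument;
# it ignores real impeller, motor, and duct differences.

def speed_for_same_flow(d_small: float, d_big: float, n_small: float) -> float:
    """Speed the big fan needs in order to match the small fan's airflow."""
    return n_small * (d_small / d_big) ** 3

def power_ratio_same_flow(d_small: float, d_big: float) -> float:
    """Big-fan power divided by small-fan power at equal airflow."""
    return (d_small / d_big) ** 4

d1, d2 = 40.0, 120.0   # mm: a typical 1U screamer vs. a rack-front fan (assumed sizes)
n1 = 15_000            # rpm: a plausible 40 mm fan speed (assumed)

n2 = speed_for_same_flow(d1, d2, n1)
ratio = power_ratio_same_flow(d1, d2)
print(f"120 mm fan matches the 40 mm fan's airflow at ~{n2:.0f} rpm")
print(f"...using ~{ratio:.1%} of the power (idealized)")

# 120 mm fan matches the 40 mm fan's airflow at ~556 rpm
# ...using ~1.2% of the power (idealized)
# The catch: the big slow fan develops far less static pressure, so it can't
# force air through a cramped chassis. An open, caseless rack is exactly the
# low-resistance path where it shines.
```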

Go look in your data center with this new eye and envision all those server cases and chassis removed. No more artificial restriction of airflow. Your racks would also weigh less than half of what they do today. You could pack twice the computing horsepower into the same amount of space and cool it more effectively than what you have installed now.

Ten years from now we’ll look back at the servers of this era and ask ourselves, “What were we thinking?” The case as we know it will vanish from the data center, much like the horse and buggy a century before. We’ll be so much better off without them.