North American Network Operators Group


Re: cooling door

  • From: Wayne E. Bouchard
  • Date: Sun Mar 30 01:55:21 2008

On Sat, Mar 29, 2008 at 06:54:02PM +0000, Paul Vixie wrote:
> 
> > Can someone please, pretty please with sugar on top, explain the point
> > behind high power density? 

Customers are being sold blade servers on the basis that "it's much
more efficient to put all your eggs in one basket" without being told
about the power and cooling requirements, or about how few
datacenters really want to (or are able to) support customers
installing 15 racks of blade servers in one spot with 4x 230V/30A
circuits each. (Yes, I had that request.)
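
For a rough sense of scale, here's a back-of-the-envelope sketch in
Python (the 80% continuous-load derating is my assumption; the volts,
amps, circuit count, and rack count are the figures from the request
above):

    # Rough power math for the request above (a sketch, not a design).
    volts, amps, circuits_per_rack, racks = 230, 30, 4, 15
    derate = 0.8  # assumed usable fraction of each branch circuit

    per_rack_kw = volts * amps * circuits_per_rack * derate / 1000.0
    total_kw = per_rack_kw * racks

    print(f"per rack: {per_rack_kw:.1f} kW")  # ~22.1 kW per rack
    print(f"total:    {total_kw:.0f} kW")     # ~331 kW for 15 racks

That's a few hundred kilowatts of critical load, plus the cooling to
match, concentrated in one corner of the floor.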

Customers don't want to pay for the space. They forget that they still
have to pay for the power, and that the power charge also covers the
added load on the UPS as well as the AC needed to get rid of the heat.

While there are advantages to blade servers, a fair number of sales
are to gullible users who don't know what they're getting into, not
to those who really know how to get the most out of them. They get
sold on the idea of using blade servers, stick them into S&D, Equinix,
and others, and suddenly find out that they can only fit 2 in a rack
because of the per-rack wattage limit, and end up having to buy the
space anyway. (Whether it's extra racks or extra sq ft or meters, it's
the same problem.)

Under current rules for most 3rd party datacenters, one of the
principal stated advantages, that of much greater density, is
effectively canceled out.

> > Increasing power density per sqft will *not* decrease cost, beyond
> > 100W/sqft, the real estate costs are a tiny portion of total cost. Moving
> > enough air to cool 400 (or, in your case, 2000) watts per square foot is
> > *hard*.

(Remind me to strap myself to the floor to keep from becoming airborne
by the hurricane force winds while I'm working in your datacenter.)

Not convinced of the first point, but my experience is limited there.
For the second, I think the practical upper bound for my purposes is
probably between 150 and 200 watts per sq foot. (It gets much harder
once you cross the 150 watt mark.) Beyond that, it gets quite
difficult to supply enough cool air to the cabinet to keep the
equipment happy unless you can guarantee a static load and custom
design for that specific load. (And we all know that will never
happen.) And don't even talk to me about enclosed cabinets at that
point.
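
To put rough numbers on the "moving air" problem, here's a quick
sensible-heat sketch (the 20 degree F supply/return delta-T is my
assumption; 3.412 BTU/hr per watt and the 1.08 constant are the usual
HVAC rules of thumb):

    # CFM of supply air needed per square foot of floor, by watt density.
    def cfm_per_sqft(watts_per_sqft, delta_t_f=20.0):
        btu_per_hr = watts_per_sqft * 3.412     # convert watts to BTU/hr
        return btu_per_hr / (1.08 * delta_t_f)  # sensible-heat airflow

    for w in (100, 150, 200, 400, 2000):
        print(f"{w:4d} W/sqft -> {cfm_per_sqft(w):5.1f} CFM/sqft")
    # 100 -> ~15.8, 150 -> ~23.7, 200 -> ~31.6, 400 -> ~63.2, 2000 -> ~315.9

At 2000 W/sqft you're pushing over 300 CFM through every square foot
of floor, which is where the hurricane comparison stops being a joke.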

> if you do it the old way, which is like you said, moving air, that's always
> true.  but, i'm not convinced that we're going to keep doing it the old way.

One thing I've learned over the succession of datacenter / computer
room builds and expansions that I've been involved in is that if you
ask the same engineer about the right way to do cooling in medium and
large scale datacenters (15k sq ft and up), you'll probably get a
different opinion every time you ask the question. There are several
theories of how best to handle this and *none* of them are right. No
one has figured out an ideal solution, and I'm not convinced an ideal
solution exists. So we go with what we know works. As people
experiment, what works changes. The problem is that retrofitting is a
bear. (When's the last time you were able to get a $350k PO approved
to update cooling in the datacenter? If you can't show a direct ROI,
the money people don't like you. And on a more practical line, how
many datacenters have you seen where it is physically impossible to
remove the CRAC equipment for replacement without first tearing out
entire rows of racks or even building walls?)

Anyway, my thoughts on the matter.

-Wayne

---
Wayne Bouchard
[email protected]
Network Dude
http://www.typo.org/~web/