North American Network Operators Group


RE: rack power question

  • From: Frank Bulk - iNAME
  • Date: Mon Mar 24 23:59:22 2008

Thanks for spelling it out in more detail.  One point I failed to make
was that as power consumption and heat per sq ft increase, the cost to
dissipate that heat appears to follow a cost/performance curve that at
some point swings up dramatically.  There appears to be a sweet spot
where it's cheaper to spread the power consumption/heat dissipation
around with more racks than to invest in products that solve those
density problems.  And that sweet spot is a moving target as vendors
come up with products to address the density problems.  So rather than
argue about how much we can pack in, perhaps we should find the number
with the maximum cost/benefit for the data center owner/operator,
taking into account the necessary variables.  Previously in the thread
the discussion was around identifying the highest number possible.
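
To illustrate the shape I have in mind, here's a toy Python model.
Every number in it is invented, and the exponent on the cooling term is
just an assumption that dense cooling gets superlinearly expensive:

    # Toy model of the density sweet spot.  All numbers are invented
    # for illustration; none are real data-center costs.
    def cost_per_kw(kw_per_rack):
        # Space/amortized-rack cost falls as you pack more kW per rack...
        space = 2000.0 / kw_per_rack
        # ...but the heat-rejection premium rises superlinearly once you
        # pass what ordinary hot/cold-aisle air cooling handles.
        cooling = 50.0 * kw_per_rack ** 1.8
        return space + cooling

    for kw in (2, 3, 4, 6, 8, 10, 15, 20, 30):
        print(f"{kw:>2} kW/rack -> ${cost_per_kw(kw):,.0f}/kW")

The minimum of that curve is the sweet spot; as vendors cheapen dense
cooling, the cooling coefficient drops and the minimum slides toward
higher kW/rack.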

Also, if one designs for the highest density technically possible, one is
building an infrastructure that solves expensive power/heat density issues
that won't exist for all customers, which translates into a higher cost per
sq ft when the sales team may only be able to command prices equivalent to
those of competitors who designed for 75% of that density capability.
Again, I'm not sure what that upper-level number is, but it's there.  Is
the solution to segregate the data center into tiers, one for low
power/heat customers and another for those that need higher power/density?
Perhaps people shouldn't be selling U's, but selling power consumption and
heat dissipation (try and measure that!) and charging only a nominal fee
for U's.
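
As a strawman of that billing model (all rates invented), in Python:

    # Strawman comparison of per-U billing vs. power-based billing.
    # Both rate schedules are invented for illustration.
    us_used, kw_drawn = 10, 6.0
    per_u_only  = us_used * 75.0                      # $75/U, power bundled
    power_based = kw_drawn * 150.0 + us_used * 10.0   # $150/kW + nominal $10/U
    print(per_u_only, power_based)                    # 750.0 vs 1000.0

Under power-based billing, the dense 6KW half-rack pays for the cooling
and UPS load it actually imposes instead of hiding it in a U count.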

Please feel free to set me straight as I'm rambling on about something I
don't know about. =)

Frank

-----Original Message-----
From: [email protected] [mailto:[email protected]] On Behalf Of
Deepak Jain
Sent: Monday, March 24, 2008 10:27 PM
Cc: [email protected]
Subject: Re: rack power question



While I enjoy hand waving as much as the next guy... reading over this
thread, there are several definitions of sq ft (ft^2) in play, and folks
are interchanging them whether they're aware of it or not.

1) sq ft = the amount of sq ft your cabinet/cage sits on.

2) sq ft = the amount of sq ft attributed to your cabinet/cage on the
data center floor including aisles and access-ways

3) sq ft = the amount of sq ft attributed to your cabinet/cage on the
data center floor including aisles and access-ways and on-the-floor
cooling equipment

4) sq ft = the amount of sq ft attributed to your cabinet/cage on the
data center floor including aisles and access-ways and on-the-floor
cooling equipment AND the amount attributed to your cabinet/cage from
the equipment room (UPS, batteries, transformers, etc).

The first definition only applies to those renting cabinets.
The first/second definitions apply to those renting cabinets and cages
with aisles or access-ways in them.
The first/second/third definitions apply to operators of datacenters
within non-datacenter buildings (where datacenter is NOT the entire load
in the facility) and renters.
All the definitions apply to anyone with a dedicated datacenter space
(and equipment room) within a building or a stand-alone datacenter.
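
To put hypothetical numbers on how much the definition matters (all
areas below are invented), a quick Python sketch:

    # One cabinet's attributed area under each definition above.
    # All areas are invented for illustration.
    footprint  = 10.0   # def 1: the cabinet's own floor tiles
    aisles     = 15.0   # + share of aisles/access-ways
    floor_cool = 8.0    # + share of on-the-floor cooling (CRACs/PCUs)
    equip_room = 12.0   # + share of UPS/battery/transformer room

    def1 = footprint
    def2 = def1 + aisles
    def3 = def2 + floor_cool
    def4 = def3 + equip_room
    print(def1, def2, def3, def4)   # 10.0 25.0 33.0 45.0

So the same cabinet can honestly be quoted anywhere from 10 to 45 sq ft,
a 4.5x spread, depending on which definition the speaker has in mind.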

By rough figuring...

While a 30KW cabinet sounds lovely, a huge amount of space is going to
be turned over to most or all of a dedicated PCU and to 1/15th of the
infrastructure of a 500KVA UPS (@0.9PF), including batteries,
transformers, etc.

Assuming power costs and associated maintenance are assigned
appropriately to this one cabinet, the amount of square footage
associated (definition #4) for that one cabinet changes by less than 30%
whether you are going 30KW in one-cabinet or 3KW in each of 10 cabinets.
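
To see how a figure like that can fall out (the areas below are my
invented placeholders), note that under definition #4 the mechanical/
electrical share scales with kW, not with cabinet count:

    # Rough check of the "<30%" claim under definition #4.
    # Both areas are invented placeholders.
    ME_SQFT_PER_KW = 15.0   # UPS/battery/transformer + PCU share per kW
    CAB_SQFT       = 15.0   # one cabinet plus its slice of aisle

    load_kw = 30.0
    me     = load_kw * ME_SQFT_PER_KW        # 450 sq ft either way
    dense  = 1  * CAB_SQFT + me              # 465 sq ft: one 30KW cabinet
    sparse = 10 * CAB_SQFT + me              # 600 sq ft: ten 3KW cabinets
    print(f"{(sparse - dense) / dense:.0%}") # ~29% difference

Because the M&E share dominates at these densities, spreading the same
load over ten cabinets moves the total attributed area by under a third.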

As an owner/operator of very large dedicated data centers for very large
customers of all sorts, I can promise you no one is doing datacenters
full (500+ cabinets) of 10KW+ (production, not theoretical) each in a
dedicated facility with no other uses to lower the average heat demand.
That probably holds at smaller cabinet counts too.

Easy caveat:

A "datacenter" that is a fraction of a large building (e.g. a 20,000 sq
ft data center within a 250,000 sq ft building) can appear to bend these
rules because the overall load (by definition #4) is averaged against it.

There is simply no economic reason to do so (at scale).  Short of water
cooling, there is a fixed amount of space taken up per unit-ton of air
cooling (medium-<air>-medium) for heat rejection.  Factor in the
premiums associated with the highest density equipment (e.g. blades,
in-cabinet PDUs, etc) and the economics become even clearer.

Even ignoring heat rejection, the battery + UPS gear for 500KVA (even
with minimal battery times) is approximately the same size (physically)
as the 12 cabinets or so it takes to reach that capacity.  [same applies
for flywheel/kinetic systems]
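
The arithmetic behind those figures, with the 80% design-loading factor
being my assumption (the rest follows from the numbers above):

    # 500KVA UPS at 0.9 power factor; the 0.8 design-loading factor
    # is an assumed typical ceiling, not a quoted spec.
    ups_kva, pf = 500.0, 0.9
    real_kw = ups_kva * pf       # 450 kW deliverable
    print(real_kw / 30.0)        # 15.0 -> one 30KW cabinet = 1/15th of the plant
    usable_kw = real_kw * 0.8    # ~360 kW at a typical design ceiling
    print(usable_kw / 30.0)      # 12.0 -> "the 12 cabinets or so"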

Our friends who do calculus in their heads can already set up the
engineering or business min-max equation to optimize this, based on a
certain level of redundancy, run-time, etc., and there aren't multiple
answers. (Hint: certain variables drop out as rounding errors.)
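
One way to write that optimization down (my formulation, not a standard
one), holding redundancy and run-time fixed:

    % Choose the per-cabinet density d (kW/cabinet) that minimizes the
    % total attributed cost of carrying a fixed load L (kW):
    \min_{d>0} \; C(d) = \frac{L}{d}\,c_{\mathrm{cab}}
        + L\,c_{\mathrm{ME}} + L\,c_{\mathrm{cool}}(d)
    % c_cab: cost per cabinet plus aisle share; c_ME: per-kW UPS/battery/
    % PCU cost, independent of d, so it drops out of the minimization
    % (one of the variables that vanishes); c_cool(d): the density-
    % dependent cooling premium, which grows quickly in d.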

TANSTAAFL: if you are a 1-4 cabinet (or similarly small) user in a larger
datacenter (definitions 1-2), by all means shove as much gear in as you
can, as long as there is no additional power premium. If they are
allocating you space based on power, or the premium is too high, take as
much space as you can for the amount of power you need -- your equipment
and your budgets will thank you. If you are operating a data center
without a bigger use in the building to average against, you really don't
have many ways to cheat the math here (e.g. geothermal only provides a
delta between definitions #3 and #4, and a lower energy premium).

Deepak Jain
AiNET