North American Network Operators Group


2008.02.18 NANOG 42 datacenter efficiencies panel notes

  • From: Matthew Petach
  • Date: Mon Feb 18 20:57:11 2008

Cranking out the notes quickly before Beer and Gear
time.  :D

Apologies for typos, etc.; here are the notes from Josh
and Martin's datacenter energy efficiency panel.
The slides went fast; for their content, check
the PDF up on the NANOG site.

Matt



2008.02.18
Josh Snowhorn, Energy Efficient Datacenter Panel


He futzes with his Mac for a while, trying to get
Google Earth to work.

Fly around the country, looking at major datacenters
around the world, where they are, how to cool.

1.5M sq ft in 111 8th.  Hard to get cooling there,
the roof is packed.

32 Avenue of the Americas, roof is full.

165 Halsey, Newark.

S&D NJ facility next to power and railroad with
fiber

Secaucus, next to RR with fiber and power lines.

Equinix Ashburn, it is the place to be.

350 Cermak in Chicago, next to RR station.
Cooling on the roof, fiber in the tracks.

Equinix CHE new facility, next to the airport, RR
for fiber, power.

Infomart, Dallas; Equinix, S&D; next to RR, power,
fiber.

DRT in Phoenix, lots of overhead, not too far from
there to the RR yard.

One Wilshire in LA; Equinix has a building down the
road, cooling on roof.
EQX second site, near tracks.

Westin Seattle, highrise office, cooling on the roof of
the garage, big hassle.

365 Main in SF, full
200 Paul, also full (DRT)
cooling on roof, everything squeezed in, next
to tracks.

PAIX Palo Alto, near tracks.

3030 Corvin expansion, Terremark.

Lundy Equinix, smart placement, building with a
lot next door; you can expand next door.

Equinix Great Oaks, lots of land next door.

Space Park, DRT, fiber plant, railroad tracks
next door,

56 Marietta in Atlanta, TelX,
infrastructure packed on the roof; no power, no
cooling.

District cooling, overhead, cooled from 7,000
tons of chilled water a few blocks away.


OK, now to datacenter power trends, panelists.

Tier1Research up first.
A 1MW datacenter takes 177 million kWh over its life.
By 2008, 50% of today's datacenters won't have sufficient
power and cooling.
Enterprise will outsource and use up datacenter space.
Humans created 161 exabytes of data in 2006.
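
[My own back-of-envelope on that lifetime figure, not from the slides;
the roughly 2x facility overhead is an assumption on my part:]

  # Sanity-check the 177 million kWh lifetime figure (my numbers, not Tier1's).
  it_load_kw = 1000                          # 1MW of IT load
  kwh_per_year_it = it_load_kw * 8760        # 8.76M kWh/year running flat out
  kwh_per_year_total = kwh_per_year_it * 2   # assume roughly 2x facility overhead
  lifetime_years = 177e6 / kwh_per_year_total
  print(round(lifetime_years, 1))            # ~10 years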

Colo--after the dotcom bust, a glut of 100 W/sq ft space
came on the market; DRT, CRG, DupontFabros came along, bought
them up, sold them off.
They're now doing metered power,
squeezing power margins.

Any2 free IX
TELX

New datacenters need 200-400 W/sq ft, which is exponential
growth in density.

Google, Yahoo, MS buying hydro power in remote areas.

Paying $400/rack to $1500/rack as density goes up,
and now utilities are making you pay for power
commits.

35 million servers worldwide, $110B spent running
them.

Datacenters are running much, much hotter.
20kW per rack, lots of space around them; the power
load can go in, but how do you cool them?

Martin Levy points out there's a wide variety of
resources in the datacenter now; servers of all ages,
disk storage, etc.  The number of types of services
is growing dramatically, while network resources
haven't grown.

More cooling means more costs.
2.5 watts in, 1 watt to equipment, 1.5 watts to infrastructure.

utility load distribution; half goes to servers, and the
other chunk largely goes to cooling.

PUE = total facility power / IT equipment power

About half of the power is burned before any server is
turned on.
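
[A quick worked PUE example from me, using the 2.5-in / 1-to-IT split above;
the split and the "about half" figure are the panel's, the arithmetic is mine:]

  # PUE = total facility power / IT equipment power
  for total_kw, it_kw in [(2.5, 1.0), (2.0, 1.0)]:
      pue = total_kw / it_kw            # 2.5 and 2.0
      overhead = 1 - it_kw / total_kw   # 60% and 50% burned before any server turns on
      print(pue, round(overhead, 2))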

3% goes to lights alone.  Turn the blasted lights off!

More power required per system over time.
What goes into the datacenter increases as per-server load
increases as well.

What people were claiming they could do 5-7 years ago
was completely unsubstantiated; now people are going
in and verifying that power and cooling support what is
being promised.

Some people are doing 300 W/sq ft with separated hot/cold
aisles; it's definitely becoming more mainstream.

A 20,000 sq ft ghetto colo (Tier 1 or 2) is $15.4M;
a Tier IV datacenter (2N)
at 20,000 sq ft is $48.4M.
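
[Dividing those out myself, not from the slide:]

  # Build cost per sq ft implied by the figures above (my arithmetic).
  tier12_per_sqft = 15.4e6 / 20000    # ~$770/sq ft for the Tier 1/2 build
  tier4_per_sqft  = 48.4e6 / 20000    # ~$2,420/sq ft for the Tier IV (2N) build
  print(round(tier12_per_sqft), round(tier4_per_sqft))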

Bottleneck of the internet is the colo!

Lead time to build a top tier 200 W/sq ft colo is
1 year (75K sq ft and up).

What drives that lead time is 50+ weeks for gensets and PDUs.

US colo, go big--you can put $2B in gear in a datacenter

Google The Dalles, Oregon, 68K pod
MS Quincy, 470,000 sq ft, 47MW

Yahoo Wenatchee and Quincy, 2 million sq ft.

Terremark, 750,000 sq ft, 100MW

NCR, 55 generators, 800 W/sq ft, turbine site.

1 wind turbine needs thousands of acres; green power
is tough.  Wave generation.
Solar tower, Seville, Spain: great, but what about night?
Green power is tough.

Flywheels instead of batteries, don't make as many
batteries.

UTC turbine generator systems:
10MW requires 200 micro turbines, 5,000 tons of absorption
chillers.
80MW requires 80 medium turbines, 40,000 tons of absorption
cooling.
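
[Implied per-unit sizes; my division, not the slide's:]

  # What the turbine counts above work out to per unit (my arithmetic).
  micro_turbine_kw  = 10000 / 200    # ~50kW per micro turbine
  medium_turbine_kw = 80000 / 80     # ~1MW (1,000kW) per medium turbine
  print(micro_turbine_kw, medium_turbine_kw)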

Panel discussion
Michael K Patterson
Subodh Bapat
Martin Levy
John, CRG,
Chris Malayter
Josh Snowhorn,
Mark Bramfitt
Edmund Elizalde

The fine detail is the really tough side; Mark talked
about it from the power company side.
PG&E has pushed the power companies.

Martin takes it from the server level, the chip level,
and calls Mike from Intel, a senior thermal architect.

In 2002, a datacenter with a Top 500 computing cluster,
in the top 20, used 128kW of power in 25 racks.
In 2007, showing the progress, that same performance on
the Top 500 list could be done in a single rack with
less than 40 servers, consuming 21kW in a single rack.

The challenge is how do you handle 21kW in a single rack?
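
[The density jump in those Intel numbers, my arithmetic:]

  # Per-rack power density, 2002 cluster vs 2007 single rack (my arithmetic).
  kw_per_rack_2002 = 128 / 25   # ~5.1kW per rack
  kw_per_rack_2007 = 21.0       # 21kW in one rack
  print(round(kw_per_rack_2007 / kw_per_rack_2002, 1))   # ~4x the per-rack density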

Progression on Moore's law: the single chip can do more,
but it draws more too.

The server equipment has been commoditized; but
as it's been commoditized, the building has been
un-commoditized.
The room used to be the commodity part; now our focus
has to shift: the servers can be brought in for low cost,
but the facility is the challenge.
It takes a lot of engineering to get to 21kW in a rack.
GreenGrid, ClimateSavers, understand what's happening
in both those organizations.

As much as high performance computing requirements
have gone sky-high, what's happening with
the power on the chips now being produced?
Power has been ramping up over time with increasing density,
but with a knee: Intel and AMD quit chasing raw performance and
shifted to chasing performance per watt; it was getting too
difficult to cool.  CPUs are dropping from 135 watt to 100 watt
parts, esp. with multiple cores.
Power consumed with multiple cores is far less than half, as
you split work up among more low-frequency cores.

So, let's go up a level, and talk about the system overall
level.

So, Subodh is up next to talk about what Sun is doing in
this space.

Even though it's a "commodity" space, there's still a
lot of technology innovation happening there; the
commodity pricing/performance hasn't bottomed out
yet.
There are incentives being applied to nudge the commodity
market as well.
Sun was one of the first partnerships between a hardware
manufacturer and a public utility.
The goal is to use less power per rack unit, while achieving
similar performance; use gear that self-calibrates its
power utilization.
If your gear already knows what its demand is, and how
much it needs to draw, then you can size your datacenter
appropriately.
Need intelligent self-calibration of power draw, and to
scale back energy draw when utilization is lower.

What about 80plus power supplies, and at the system
level, what can be done--how much power is wasted
by running systems that aren't the more energy
efficient models?  Power supplies do make a huge
difference; older power supplies are very likely
60-70% efficient, so 500 watts out can approach 1kW in;
it's basically just a big transformer, fundamentally.
Transformers have an efficiency curve, and can be less
efficient at lower power levels.
Sun is aiming for 90plus power supplies, with a
smoother, flatter power efficiency curve.
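
[A small sketch from me of why the supply efficiency matters; the 60-70%
and 90+ figures are the ones quoted above, the wall-power numbers are my
arithmetic:]

  # Wall power needed to deliver a fixed 500W DC load at different supply efficiencies.
  dc_load_w = 500
  for eff in (0.60, 0.70, 0.90):
      wall_w  = dc_load_w / eff        # ~833W, ~714W, ~556W at the wall
      waste_w = wall_w - dc_load_w     # heat you then also pay to cool
      print(eff, round(wall_w), round(waste_w))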

Can replace 7 older servers with 1 newer generation
server, which reduces power and space requirements.

Chris (S&D), John (CRG), and Ed (Equinix) will talk about
datacenters.

John addresses new datacenter builds.
Milpitas, CA; found a property, medium sized, left
over from Worldcom, a DMS200 site, 75 W/sq ft;
the electromechanical wouldn't support 200 W/sq ft.
Met with PG&E, selected Carrier for chilled water,
Caterpillar for generators, and the city of Milpitas;
took 3 days to work out the best way of building the
best datacenter, to make sure chilled water would
not pollute the environment, and to run generators at
Tier 2 category at the very least.
They can tell customers they can deliver 200 W/sq ft,
and will enforce good practices: hot and cold aisles,
enforced blanking panels, high density server installations.
They've produced a cutting edge facility; it needs to
be a cooperative effort between cities, utilities,
the building owner, and the tenants.
Milpitas is new enough there's been no legacy hardware
going into the building.

Chris, Edmund, what do you do about legacy
customers, and how do you incentivize them?
Charge them more; power rates are going up
in most markets.  Back to Josh's point, trying
to do environmentally friendly power in
urban environments requires incentives.
Edmund sits in as consulting engineer for Equinix,
and has dealt with utilities and customers.
Make sure you have as much backup power as you
have main power, so that you can keep your customers
up; make sure you don't overcommit the power available
for your customers.
Track power consumption for customers, make sure that
they're staying within their usage commitment.
Free cooling: you still have to pay to run air
handlers, fans, etc.; not entirely free, but it helps.
UPS systems, move to SRD, aim to run your UPS at 75%
for highest efficiency, around 95%, but you can't
design for best efficiency, have to design for worst
power utilization efficiency.
Many types of cooling, with different mechanical
systems; liquid cool, air cool, etc.
Efficiency also varies over time, so some days may be
60% and others 80%.

If Josh can provide green power to the datacenter,
10c/kwhr, vs 15c/kwhr, if he sold it, would we pay
1/3 more to get it?  Nobody is willing to pay more
for green power.
Austin will charge green power rate, but will
keep it flat rate for 20 years.
Half of his power is hydro, not counted as
renewable or green; but he has a mandate that he
hit 20% green power; we can't generate enough
here in California, so we have to buy it from
outside.
Cheap power in the Pacific Northwest is based on existing,
underutilized assets; once the utilization goes
back up, there will be rate shocks in store.
Mark from PG&E notes that there are problems with SF
datacenters, in that transmission gear will need
to be upgraded.

Opened up to questions.

Anton Kapella, 5nines colo, .3MW--good aerial shots,
none of them have solar shields; is black body not
something they need to worry about?
120,000 sq ft of flat roof, two 60m and one 11m dish; how
does he put a solar shield up that will withstand
200mph winds?

200,000 sq ft in Sacramento, RagingWire colo; took the
top of the roof, with no HVAC or other equipment, put a
white polymer coat on it, dropped the inside temperature
by 20 degrees; payback is 9-10 months.
They had a state standard to meet, but it was a
big improvement still for them--not sexy, but
very good.

At high noon, incident sunlight is about 1kW per sq meter,
and solar panels are about 20% efficient; you'll get about 150 watts
per sq meter, so the max on a roof is about 2MW, and
you'd have problems at night.
Probably easier to reflect the energy, rather than
absorbing it and trying to use it.
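
[Checking that roof number myself; the ~150W per sq meter usable figure is
from the answer above, and using the 120,000 sq ft roof from the question
is my assumption:]

  # Rough ceiling on rooftop solar for a ~120,000 sq ft roof (my arithmetic).
  roof_sqm = 120000 * 0.0929     # ~11,150 square meters
  usable_w_per_sqm = 150         # ~20% panels on ~1kW/m^2, per the answer above
  peak_mw = roof_sqm * usable_w_per_sqm / 1e6
  print(round(peak_mw, 2))       # ~1.7MW at high noon, zero at night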

Q: Owen, 0.1MW, bought a Prius, it was cheaper than
buying the gas for his LX.  So his Prius was free,
even without a sticker.

Q: Woody, server form factor question--he has 0MW of
datacenter, apparently.  We don't seem to care about
high density anymore the way we did years ago.
If we're trying to make things green, why did we
stop trying to make servers smaller and smaller?
Intel guy responds that they're still working on
making them smaller; 64 servers in 44u on their
newer dual core boxes.
Sun used to fit 16 blades in a 3u box; now we're
back to 1 server per U.  It's a cooling issue.
Sun notes that server vendors design for a number
of criteria; availability, reliability, redundancy,
etc.
The nature of applications people run has changed;
when you had a single blade 4" high that would fit
in a FedEx envelope, you can't fit 16GB of DRAM in
that form factor, or fit a 70-150 watt CPU; and
DIMMs put out huge amounts of heat; heat from
memory is starting to dwarf the amount of heat
put out by the CPUs; people are now pushing for
diskful blades with more memory.
Performance per watt seems to be the big metric
now.
The more you enclose into one chassis, the more
efficient you can be about the cooling for it.
Also, look at the number of fans in each chassis,
and how much power is used just in cooling.
Could be up to 40 watts of power used by fans in
a 1u, and up to 100 watts of fan power in a 2u.

Classic 1u or 2u server, dual proc, how much does
the average server suck down?  Looking at some
trend data earlier in the week, they're pulling
250-300 watts per dual proc server; so 30 watts
is 10% for the fans.  10% is still millions a
year.
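
[Spelling out that fan fraction; the 30W and 300W figures are from above,
the fleet size and power rate are my assumptions:]

  # Fan power as a share of server draw, and what it costs at fleet scale (my arithmetic).
  fan_fraction = 30 / 300                                # ~10% of a 300W dual-proc 1U
  fleet_kwh_per_year = 100000 * (30 / 1000) * 8760       # assumed 100k-server fleet
  annual_fan_cost = fleet_kwh_per_year * 0.10            # assumed $0.10/kWh -> ~$2.6M/year
  print(fan_fraction, round(annual_fan_cost))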

Someone from Afilias, small power, .01MW.
If you look at the whole system, transmission alone,
etc., can you avoid the 30% loss in power transmission,
avoid AC to DC conversions, maybe go nuclear with
direct DC feeds?
When a router/switch shop installs big DC gear, they get
an electrician, it's a bulk feed, well handled.
208V is easier to cool, 120V is harder to cool.
Servers with DC are lots of tiny feeds.
Homogeneous servers are easier to handle.

Mark notes that he's got less loss; there's about
4% loss in transmission gear, and from the PNW it's about
5%; it's never been 30%!
Lawrence Berkeley did a recent study showing that
you can get a 28% improvement by going to DC power
on legacy systems.
You can get a 23% improvement by going to high-efficiency
208VAC power, though, so it's more about getting
high efficiency components!
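
[My reading of those LBL percentages, assuming both are measured against
the same legacy AC baseline:]

  # Relative energy use vs a legacy AC baseline (my interpretation of the numbers above).
  dc_plant  = 1.00 - 0.28     # 28% better than legacy with a DC plant
  hi_eff_ac = 1.00 - 0.23     # 23% better with high-efficiency 208VAC
  extra_from_dc = (hi_eff_ac - dc_plant) / hi_eff_ac
  print(round(extra_from_dc, 3))   # ~6.5% further; most of the win is the components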

You'll still get a loss with DC plant, there's
still a rectifier from the grid power; it's just
more centralized.

The lower the voltage, the higher the loss.  You want 17,400V
as long as possible, and only go down to 480V
when very close to the fanout.

Q: Railroads going bankrupt because they thought
they were in railroad business vs transport
business.  How do we get greater efficiency, how
do more people handle virtualization better;
people want to be able to shut down unused
resources but be confident that they can come
up as needed.
A later panel will address virtualization more
completely; shed load, shunt usage/shift load,
another very big topic.  Some datacenters also
do chilled water reservoir at night.

Q: .25MW, not sure who it is; question about moving
heat out of the server, off the wafer, to the outside.  The server
designer just wants to get heat away from the chip;
they don't think on a larger scale, about moving
all the heat in a uniform direction.  How do
you make sure you don't mingle cold air with
hot air?
Need server designers to come up with uniform
"cold air in here, hot air goes out there"
model.  Some do side-to-side, others go front
to back; need to get HVAC people to work with
server manufacturers to standardize on that.

Q: Gilbert, small network--if pricing goes up
for green power, would we pay for it?  When colos
are charging higher than list rate already, people are
more hesitant; and when it's decoupled, it's
harder to justify paying more when it could end
up being a lot more.

Q: Robert Ingram, datacenters are being built next to
the power industry, and along railroad tracks; but
why not use up the waste heat?  It's very low
grade waste heat, though.
A biogas facility processing waste sugar cane
into power; another person speaks up about
providing power for 30,000 homes from a
landfill as a heat source.

For more info, do a quick Yahoo Search
( http://search.yahoo.com/ )
for the following phrases:
Native Energy
TerraPass
Carbon offset purchases, credits,
BioMass projects.

Gotta wrap up now, few housekeeping issues,
thanks to panelists and companies who let them
come in.

Break time now.