North American Network Operators Group


RE: cooling door

  • From: michael.dillon
  • Date: Mon Mar 31 11:25:45 2008

> Here is a little hint - most distributed applications in 
> traditional jobsets, tend to work best when they are close 
> together. Unless you can map those jobsets onto truly 
> partitioned algorithms that work on local copy, this is a 
> _non starter_. 

Let's make it simple and say it in plain English. Users of
services have already decided that it is "good enough" to use
a service hosted in a data center that is remote from the
client. Remote means in another building in the same city, or
in another city.

Now, given that context, many of these "good enough" applications
will run just fine if the "data center" is no longer in one
physical location, but distributed across many. Of course,
as you point out, one should not be stupid when designing such
distributed data centers or when setting up the applications
in them.

I would assume that every data center has local storage available
over some protocol like iSCSI, probably on a separate network
from the external client access. That right there solves most of
the problems with traditional jobsets. And secondly, I am not
suggesting that everybody should shut down big data centers or
that every application should be hosted across several of these
distributed data centers. There will always be some apps that
need centralised scaling. But there are many others that can
scale in a distributed manner, or at least use distributed
mirrors in a failover scenario.
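The failover idea above can be sketched in a few lines of Python. This is only an illustration of the pattern, not any real system's API; the hostnames and the `fetch_from` callback are hypothetical.

```python
# Sketch: client-side failover across mirrors in distributed data centers.
# Hostnames and the fetch callback are hypothetical illustrations.

MIRRORS = [
    "dc1.example.net",   # primary data center
    "dc2.example.net",   # mirror in another metro location
]

def fetch_with_failover(mirrors, fetch_from):
    """Try each mirror in order; return the first successful response."""
    last_error = None
    for host in mirrors:
        try:
            return fetch_from(host)
        except ConnectionError as e:
            last_error = e   # this mirror is down; try the next one
    raise last_error         # every mirror failed

# Example: simulate the primary data center being unreachable.
def fake_fetch(host):
    if host == "dc1.example.net":
        raise ConnectionError("primary down")
    return "response from " + host

print(fetch_with_failover(MIRRORS, fake_fetch))
# -> response from dc2.example.net
```

The point is that the application keeps working when one physical location disappears, which is exactly what makes "good enough" services tolerant of being spread across several smaller data centers.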

> No matter how much optical technology you have, it will tend 
> to be more expensive to run, have higher failure rates, and 
> use more power, than simply running fiber or copper inside 
> your datacenter. There is a reason most people, who are 
> backed up by sober accountants, tend to cluster stuff under one roof.

Frankly, I don't understand this kind of statement. It seems
obvious to me that high-speed metro fibre exists and that corporate
IT people already have routers and switches and servers in the
building, connected to the metro fiber. Also, the sober accountants
do tend to agree with spending money on backup facilities to
avoid the risk of single points of failure. Why should company A
operate two data centers, and company B operate two data centers,
when they could both outsource it all to ISP X running one data
center in each of the two locations?

In addition, there is a trend to commoditize the whole data center.
Amazon, with EC2 and S3, is not the only company that offers no
colocation at all yet still lets you host your apps out of its
data centers. I believe that this trend will pick up steam. As
the corporate market begins to accept running virtual servers on
top of a commodity infrastructure, there is an opportunity for
network providers to branch out: not only to be specialists in
the big consolidated data centers, but also to run many smaller
data centers linked by fast metro fiber.

--Michael Dillon