North American Network Operators Group


RE: High Density Multimode Runs BCP?

  • From: Scott McGrath
  • Date: Wed Jan 26 18:45:51 2005

Hi, Thor

We used it to create zone distribution points throughout our datacenter,
which run back to a central distribution point.  This solution has been
in place for almost 4 years.  We have 10Gb SM Ethernet links traversing
the datacenter which link to the campus distribution center.

The only downsides we have experienced are:

1 - Lead time in getting the component parts

2 - Easily damaged by careless contractors

3 - Somewhat higher than normal back reflection
    on poor terminations (a quick return-loss
    sketch follows below)
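Back reflection is usually quantified as optical return loss -- the ratio
of incident to reflected power, in dB. A minimal sketch with illustrative
numbers (not measured values from this plant):

    import math

    # Optical return loss in dB: how much weaker the reflected power is
    # than the incident power. Higher ORL means less back reflection.
    def return_loss_db(p_incident_mw, p_reflected_mw):
        return 10 * math.log10(p_incident_mw / p_reflected_mw)

    # Illustrative values only: a clean termination reflects far less
    # light than a poorly polished one.
    print(return_loss_db(1.0, 0.00001))  # ~50 dB -- good termination
    print(return_loss_db(1.0, 0.001))    # ~30 dB -- poor termination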

                            Scott C. McGrath

On Wed, 26 Jan 2005, Hannigan, Martin wrote:

>
>
> > -----Original Message-----
> > From: Thor Lancelot Simon [mailto:[email protected]]
> > Sent: Wednesday, January 26, 2005 3:17 PM
> > To: Hannigan, Martin; [email protected]
> > Subject: Re: High Density Multimode Runs BCP?
> >
> >
> > On Wed, Jan 26, 2005 at 02:49:29PM -0500, Hannigan, Martin wrote:
> > > > >
> > > > > When running say 24-pairs of multi-mode across a
> > > > > datacenter, I have considered a few solutions, but am not
> > > > > sure what is common/best practice.
> > > >
> > > > I assume multiplexing up to 10Gb (possibly two links thereof)
> > > > and then back down is cost-prohibitive?  That's probably the
> > > > "best" practice.
> > >
> > > I think he's talking physical plant. 200m should be fine. Consult
> > > your equipment specs for power levels and supported distances.
> >
> > Sure -- but given the cost of the new physical plant installation
> > he's talking about, the fact that he seems to know the present
> > maximum data rate for each physical link, and so forth, I think it
> > does make sense to ask the question "is the right solution to simply
> > be more economical with physical plant by multiplexing to a higher
> > data rate"?
> >
> > I've never used fibre ribbon, as advocated by someone else in this
> > thread, and that does sound like a very clever space- and possibly
> > cost-saving solution to the puzzle.  But even so, spending tens of
> > thousands of dollars to carry 24 discrete physical links hundreds of
> > meters across a
>
> Tens of thousands? 24-strand x 100' @ $5/ft = $500. Fusion splicing
> is $25 per splice per strand including termination. The 100m
> patch cords are $100.00. It's cheaper to bundle and splice.
>
> How much does the mux cost?
>
>
> > datacenter, each at what is, these days, not a particularly high
> > data rate, may not be the best choice.  There may well be some
> > question about at which layer it makes sense to aggregate the links
> > -- but to me, the question "is it really the best choice of design
> > constraints to take aggregation/multiplexing off the table" is a
> > very substantial one here and not profitably avoided.
>
> Fiber ribbon doesn't "fit" in any long-distance (over 7')
> distribution system, rich or poor, that I'm aware of. Racks,
> cabinets, et al. are not very conducive to it. The only application
> I've seen was IBM Fibre Channel.
>
> Datacenters are sometimes permanent facilities, and it's better,
> IMHO, to make things more permanent with cross-connects than with
> aggregation. It enables you to make your cabinet cabling and
> your termination-area cabling almost permanent and maintenance
> free - as well as giving you test, add, move, and drop. It's more
> cable, but less equipment to maintain and support, and it reduces
> failure points. It enhances security as well. You can't open
> the cabinet and just jack something in. You have to provision
> behind the locked term area.
>
> I'd love to hear about a positive experience using ribbon cable
> inside a datacenter.
>
>
> >
> > Thor
> >
>
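A back-of-the-envelope comparison of the two approaches, plugging in
Martin's figures from above. The mux price is a placeholder assumption,
since nobody in the thread quotes one, and splicing/patching every
strand at both ends is an assumption as well:

    # Rough cost comparison: bundle-and-splice (Martin's figures)
    # vs. multiplexing up to 10Gb (Thor's suggestion). The mux price
    # is a made-up placeholder -- substitute a real quote.
    STRANDS = 24
    CABLE_COST_PER_FOOT = 5.00    # 24-strand cable, $/ft (Martin)
    RUN_FEET = 100                # the run length Martin priced
    SPLICE_PER_STRAND = 25.00     # fusion splice incl. termination (Martin)
    PATCH_CORD = 100.00           # 100m patch cord (Martin)

    # Assumes every strand is spliced and patched at both ends.
    bundle_and_splice = (CABLE_COST_PER_FOOT * RUN_FEET
                         + SPLICE_PER_STRAND * STRANDS * 2
                         + PATCH_CORD * STRANDS * 2)

    MUX_COST_PER_END = 15000.00   # placeholder price for 10Gb aggregation gear
    mux = MUX_COST_PER_END * 2 + PATCH_CORD * 2

    print(f"bundle+splice: ${bundle_and_splice:,.2f}")   # $6,500.00
    print(f"mux (assumed): ${mux:,.2f}")                 # $30,200.00

On those (assumed) numbers the physical-plant route wins easily, which
is Martin's point; the balance shifts only if the aggregation gear is
already owned or the strand count keeps growing.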