North American Network Operators Group

Re: too many routes

  • From: Sean M. Doran
  • Date: Sun Sep 14 18:54:17 1997

Alan Hannan <[email protected]> writes:

> > AFAICT there is only one orderable, deployable and tested
> > IP router which observably terminates multiple 622-Mbit/s bit
> > pipes, notably that made by cisco.  
> 
>   Hmm, well, the lack of knowledge I suppose shouldn't be held
>   against you.
> 
> 	http://www.ascend.com/505.html
> 
>   lists their support for such interfaces.
> 
>   ( nonsequiter comments deleted )

I hope the comments sent privately, prior to your writing
any of this, pointed out that nowhere in this URL or in
Ascend's other literature is the concept of non-ATM 622
mentioned.  Oh wait, that's a non sequitur, because it's
the same thing as POS, right, only with extra features.

>   On their head?  I can assure you the IGP is still a manageable
>   problem in larger ISPs with meshed backbones.  In fact, there is
>   one solution available that is not yet even exercised.

It's only manageable because Henk Smit (and before him
Dave Katz) has been going to lots of trouble to clean up
all the edge cases strangely flat backbones reveal, and
because Cisco has no trouble with the idea of adding in
richer metrics to Integrated IS-IS.  (Well, neither do I, actually :) )

Really rich connectivity means that it is more work to
calculate new forwarding tables as point-to-point
connections come and go, and the results are much harder
to predict.  
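
To put rough numbers on "more work", here is a
back-of-the-envelope sketch (Python; the topology sizes
are invented purely for illustration): in a flat full
mesh every router carries n-1 adjacencies, the link-state
database carries n(n-1)/2 point-to-point links, and every
one of those links can flap and force another SPF run.

	def full_mesh_links(n):
	    """Point-to-point links in a full mesh of n routers."""
	    return n * (n - 1) // 2

	for n in (10, 30, 100):
	    print(n, "routers:", full_mesh_links(n), "links in the LSDB,",
	          n - 1, "adjacencies per router")

Ten routers is 45 links; a hundred routers is 4950, each
of them a potential SPF trigger.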

Of course, if you take your approach and DON'T route
dynamically between routers, but rather reroute VCs around
link failures, then you might not have these problems
except when fun things happen, like ATM switches reloading
or router-to-switch links failing.  At that point you have
the fun of working out appropriate backup paths for the
traffic that used to fan out over a large number of VCs.
This is non-trivial and scales geometrically with the size
of the mesh.
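
To make "scales geometrically" concrete, a quick sketch
(Python again; the single-uplink-per-router assumption and
the switch counts are mine, purely for illustration): a
full VC mesh among n routers is n(n-1)/2 VCs, and if any
switch reload or any router-to-switch link failure can
strand traffic, the number of (VC, failure) cases you need
a backup answer for grows roughly as the product.

	def vc_mesh(n_routers):
	    """Router-to-router VCs in a full mesh."""
	    return n_routers * (n_routers - 1) // 2

	def backup_cases(n_routers, n_switches):
	    """(VC, single-failure) combinations to have a backup path for,
	    assuming one uplink per router and that any single switch
	    reload or router-to-switch link failure must be survived."""
	    failures = n_switches + n_routers
	    return vc_mesh(n_routers) * failures

	for n in (10, 30, 100):
	    switches = max(n // 5, 1)          # made-up switch count
	    print(n, "routers,", switches, "switches:", vc_mesh(n), "VCs,",
	          backup_cases(n, switches), "VC/failure cases to plan for")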

> > try to avoid thinking too hard about how to build spanning
> > trees without building a physical VC topology for
> > multicast and can put up with interesting buffering and
> > switching effects 
> 
>   Okay, I'll bite: Which interesting buffering and switching effects?

You have to copy datagrams out to each branch of your
spanning tree.  If you have a large number of VCs out one
interface overlaid with a dense spanning tree, you end up
with large buffering requirements on that card, or
alternatively you do funny queue management and eat lots
of queueing slots if you use an indirection list
(i.e., you use pointers back to the multicast PDU rather
than copy the PDU multiple times; however, "boxing"
datagrams, while it saves memory-to-memory copies, can
make it difficult to do clever cache prefetching from
packet memory, may not reduce the number of memory
reads you have to do, and certainly will increase CPU
use).  In other words, when using smart line cards like
Wellfleet or Cisco do, if you are doing lots of multicast
with a fairly large distribution, you are better off
keeping the spanning tree small.  If you have a large
number of downstream interfaces on a box, that box works
harder.  If you have a large number of downstream
interfaces on a single card, that card works harder.
"Harder" can be described as more CPU time and more
memory references in some combination, depending on the
queueing mechanism and what does the packet copying.
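
To make the indirection-list trade-off concrete, a toy
sketch (Python, nothing vendor-specific; the PDU size and
branch count are made up): either you copy the PDU once
per branch and pay in packet-buffer memory, or you queue
references back to one shared PDU and pay in queue slots,
pointer chasing and the bookkeeping of not reusing the
buffer until the last branch has been serviced.

	PDU = bytes(1500)                    # one multicast datagram
	BRANCHES = 200                       # VCs out one interface

	# Strategy 1: copy per branch -- N self-contained buffers,
	# N memory-to-memory copies.
	copies = [bytearray(PDU) for _ in range(BRANCHES)]

	# Strategy 2: indirection list -- N queue entries pointing back
	# at one shared buffer, which cannot be reused until the last
	# branch has been sent.
	shared = memoryview(PDU)
	refs = [shared for _ in range(BRANCHES)]

	print("copied:", BRANCHES * len(PDU), "bytes of packet buffer")
	print("shared:", len(PDU), "bytes of packet buffer plus",
	      BRANCHES, "queue slots holding references")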

Also, when you have a large concentration of downward links
on a box, that box tends to attract more join and prune
messages, which also keeps it busier.  You could add RSVP
issues to that too, if you thought RSVP were somehow
relevant to the Universe.

> > and, as you say, the dynamics of WHY it
> > works vs. why it doesn't work when things break.
> 
>   Yes, indeed this is a tradeoff.

Obviously you and I have different thoughts about
Byzantine failures.  I've experienced enough of them
driven by multilayer interoperability issues with a small set
of vendors that widening the vendor base substantially
(i.e., adding lots of switch vendors and the like, not
just adding a second router vendor) or increasing the
number of layers which have to interoperate does not
appeal much to me.

>   But, the more I think about the KISS principle, I become convinced
>   that only our own limitations, ignorance, and hesitancy prevent us
>   from adding complexity to achieve increased control.

Sure.  How much you want to spend on NOC staff is a
factor in how much complexity your NOC staff can handle,
and that is a factor in how quickly Byzantine failures can
be dealt with.

Personally, I find Byzantine failures that require waking
up senior smart people at multiple vendors' companies
scary, and as Byzantine failures happen sometimes in big
networks, keeping down the number of people potentially
needed to effect a fix for something really weird strikes
me as a win.

In short, defensive engineering.

Sure, complexity seems attractive, but only until your
first major meltdown driven directly by, or even just
exacerbated by, that complexity.

> > I hate to say it but I think I have become old and this
> > might explain why I like really simple and straightforward
> > failure modes.
> 
>   See above comments.

See bags under eyes and many scars.

>   Yes, we at 'bythetrees.com' are avid supporters of ATM in today's
>   technology.

Oh sure, you're bythetrees.com NOW.  --:)

>   Why do you always insist on linking one's place of business with
>   their technological ideology?

Because:

	a) it makes people angry, and that's really funny
	b) it reminds people whom they are working for and
	   what trade-offs they make between what they
	   want and what their employers say they want
	c) a + b sometimes causes people to push their
	   employers into doing things "correctly", or
	   leads people to realize their employers are
	   hopeless and that there are other opportunities
	   elsewhere (which sometimes also fixes the employer)

the latter part of (c) is particularly edifying if it
involves physical violence to doors and so forth.
	
So, needle needle, if you are at variance with UUWHO's
technical policies, why are you still there?

>   Certainly I work at UUNET and my opinion is occasionally
>   involved in certain decisions.

Oh, cool, so I can blame you for brain damage.  Several
managers will be really happy now that you've volunteered
yourself to be picked on in technical lists instead of
them. :-)

>   Hmm, I tend to be atheistic about technology -- what can it do for
>   me today, and what will it do tomorrow with a reasonable chance
>   of success.

I think you mean agnostic, not atheistic.  I distinctly
remember you comparing me to Jesus Christ.

> > There may be choice in what you can put at the end of an
> > OC12 though, but it's either something smart or something
> > you'd find at Hobson's Tavern.
> 
>   How do you define smart?

When you put your brain into a blender and unpack it into
individual cells in hopes you can put it all back together
the way it was without having noticed, that's not very
smart.

	Sean. (don't send the next cheesecake as
		breadcrumbs please :-) )