North American Network Operators Group


Re: CIDR Report

  • From: Adrian Chadd
  • Date: Mon May 15 11:15:19 2000

On Mon, May 15, 2000, Chris Williams wrote:


> Also, I don't really buy the "how do we manage 250K routes?" argument.
> Any well-designed system which can effectively manage 10K of something,
> in general, ought to be able to handle 250K of it; it's just a question
> of scaling it up, and there's no question that processors are getting
> faster and memory cheaper every day. If there's some magic number of
> routes that suddenly becomes unmanageable, I'd love to hear why.

I agree with everything you said about /24 multihoming except this.
If it were just a question of scaling it up to be faster, it would
already have been solved and we wouldn't be discussing it. "Making
BGP go faster" isn't a matter of throwing more RAM and CPU at it;
it's a matter of actually researching the problem with the data we
have today and developing new solutions to it. People in general
have a strange concept of "well designed": there is no absolute
notion of well designed, only "well designed with a given set of
data and a given level of knowledge". BGPv4 was designed with
different goals, different data and different ways of thinking, so
why would you expect it to scale to _today's_ demands?

"Faster BGP" and "Handling 250k routes" is not just a function of
CPU speed and memory capacity. You have to consider network topology,
latency/packetloss, router software (as well as hardware, so you can
throw in your CPU and hardware here), peering patterns, route
filtering, IGP/iBGP behaviour and some liberal application of fairy

Read Craig and Abha's presentations on BGP convergence at the last
two NANOG meetings. They might shed some light on the issues and
behaviour involved.
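For a feel of the kind of effect those presentations cover, here's a
toy synchronous model of "path exploration" after a withdrawal in a
full mesh. The topology, tie-breaking and update rules are invented
for the sketch and ignore real BGP timers (e.g. the minimum route
advertisement interval); the point is only that update counts grow
with the number of alternate paths, not with CPU speed.

```python
# Toy model: routers 1..n are fully meshed and all reached a
# destination directly through router 0, which has just withdrawn.
# Each round, every router falls back to the shortest path a peer
# still advertises (with simple AS-path loop prevention), so routers
# keep exploring progressively longer, already-stale paths before
# finally withdrawing.
def converge(n):
    # paths[r] = AS-path tuple router r currently advertises, or None
    paths = {r: (r, 0) for r in range(1, n + 1)}
    rounds = updates = 0
    while any(paths.values()):
        rounds += 1
        new = {}
        for r, p in paths.items():
            # Candidate paths: what peers advertised last round,
            # excluding any path that already contains us.
            cands = [q for s, q in paths.items()
                     if s != r and q and r not in q]
            new[r] = ((r,) + min(cands, key=len)) if cands else None
            if new[r] != p:
                updates += 1
        paths = new
    return rounds, updates

for n in (2, 3, 4, 5):
    print(n, converge(n))
```

With two routers the model converges in 2 rounds and 4 updates; the
message count climbs as the mesh grows, which is the flavour of
behaviour the convergence measurements describe.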