North American Network Operators Group


Re: Internet speed report...

  • From: Simon Leinen
  • Date: Mon Sep 06 09:25:45 2004

Michael Dillon writes:
> In the paper 
> http://klamath.stanford.edu/~keslassy/download/tr04_hpng_060800_sizing.pdf

That's also in the (shorter) SIGCOMM'04 version of the paper.

> they state as follows:
> -----------------
> While we have evidence that buffers 
> can be made smaller, we haven't tested the hypothesis
> in a real operational network. It is a little difficult
> to persuade the operator of a functioning, profitable network
> to take the risk and remove 99% of their buffers. But that
> has to be the next step, and we see the results presented in
> this paper as a first step towards persuading an operator to
> try it.
> ----------------

> So, has anyone actually tried their buffer sizing rules?

> Or do your current buffer sizing rules actually match,
> more or less, the sizes that they recommend?

The latter, more or less.  Our backbone consists of 1 Gbps and 10 Gbps
links, and because our platform is a glorified campus L3 switch (Cisco
Catalyst 6500/7600 OSR, mostly with "LAN" linecards), we have nowhere
near the buffer space that was traditionally recommended for such
networks.  (We use the low-cost/performance 4-port variant of the 10GE
linecards.)
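
For concreteness, here is the arithmetic behind the paper's rule, as a
minimal sketch - the link speed, RTT, and flow count below are
illustrative assumptions, not measurements from our network:

    import math

    # Classic rule of thumb:          B = RTT * C
    # Appenzeller/Keslassy/McKeown:   B = RTT * C / sqrt(N)
    C   = 10e9      # link capacity in bits/s (10GE)
    RTT = 0.100     # assumed average round-trip time, seconds
    N   = 10000     # assumed number of long-lived TCP flows

    classic = RTT * C                  # ~1 Gbit  (~125 MB of buffer)
    sized   = RTT * C / math.sqrt(N)   # ~10 Mbit (~1.25 MB of buffer)

    print("classic rule: %.0f Mbit, sqrt(N) rule: %.1f Mbit"
          % (classic / 1e6, sized / 1e6))

With 10,000 flows that is a hundredfold reduction - roughly the "remove
99% of their buffers" from the quote above, and much closer to the
modest per-port buffering that LAN-style linecards provide anyway.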

The decision to use these types of interfaces (as opposed to going the
Juniper or GSR route) was mostly driven by price, and by the
observation that we don't want to strive for >95% circuit utilization.
We tend to upgrade links at relatively low average utilization -
router interfaces are cheap (even 10 GE), and on the optical transport
side (DWDM/CWDM) these upgrades are also affordable.

What I'd be interested in:

In a lightly-used network with high-capacity links, many (1000s of)
active TCP flows, and small buffers, how well can we still support the
occasional huge-throughput TCP flow (Internet2 land-speed record :-)?

Or conversely: is there a TCP variant/alternative that can fill 10Gb/s
paths (with maybe 10-30% of background load from those thousands of
TCP flows) without requiring huge buffers in the backbone?
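
To see why stock TCP is the hard part here, consider the steady-state
Reno throughput model (the Mathis et al. approximation); again a
sketch, with an assumed transcontinental RTT and standard MTU:

    MSS    = 1460 * 8   # segment size in bits (standard Ethernet MTU)
    RTT    = 0.100      # assumed round-trip time, seconds
    target = 10e9       # target throughput, bits/s

    # Mathis approximation: rate ~ (MSS / RTT) * sqrt(3/2) / sqrt(p)
    # Solving for the loss probability p that still permits `target`:
    p = 1.5 * (MSS / (RTT * target)) ** 2

    window = target * RTT / MSS   # required congestion window, segments
    print("needs a cwnd of ~%.0f segments" % window)      # ~86,000
    print("tolerates a loss rate of at most ~%.1e" % p)   # ~2e-10

A tolerable loss rate around 2e-10 means essentially zero drops for
hours at a stretch, which is why proposals like HighSpeed TCP (RFC
3649), Scalable TCP, and FAST TCP change the congestion response
function rather than rely on ever-bigger router buffers.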

Rather than over-dimensioning the backbone for two or three users (the
"Petabyte crowd"), I'd prefer making them happy with a special TCP.
-- 
Simon.