North American Network Operators Group


Re: packet reordering at exchange points

  • From: Mathew Lodge
  • Date: Wed Apr 10 12:07:26 2002

At 03:48 PM 4/10/2002 +0100, Peter Galbavy wrote:

> Why?
>
> I am still waiting (after many years) for anyone to explain to me the
> issue of buffering. It appears to be completely unnecessary in a router.

Well, that's some challenge, but I'll have a go :-/

As far as I can tell, buffering comes down to traffic shaping vs. rate limiting. If you have a buffer on the interface, you are doing traffic shaping -- whether or not your vendor calls it that. When the rate at which traffic arrives at the queue exceeds the rate at which it leaves, packets are buffered for transmission some time later. In effect, the queue absorbs traffic bursts and then spreads transmission of the buffered packets over time.
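That behaviour can be sketched in a few lines. This is a toy slot-based model I've made up for illustration (the function name, packet-counting units, and parameters are my own, not from any router), showing how a buffer turns a burst into a smooth drain:

```python
from collections import deque

def shape(arrivals, rate, buffer_size):
    """Toy shaper: packets exceeding the drain rate are buffered and
    sent in later time slots instead of being dropped.

    arrivals[t]  -- packets arriving in slot t
    rate         -- packets the link can send per slot
    buffer_size  -- queue capacity in packets
    Returns (packets_sent_per_slot, packets_dropped).
    """
    queue = deque()
    sent, dropped = [], 0
    for pkts in arrivals:
        for _ in range(pkts):
            if len(queue) < buffer_size:
                queue.append(1)
            else:
                dropped += 1             # only drop when the buffer is full
        n = min(rate, len(queue))        # drain at most `rate` per slot
        for _ in range(n):
            queue.popleft()
        sent.append(n)
    while queue:                         # drain whatever the burst left behind
        n = min(rate, len(queue))
        for _ in range(n):
            queue.popleft()
        sent.append(n)
    return sent, dropped
```

With a big enough buffer, a 10-packet burst into a 2-packets-per-slot link comes out as five slots of 2 -- the burst is spread over time, nothing is lost.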

If you have no queue, or a very small one (relative to the rate x average packet size), and the arrival rate exceeds the transmission rate, you can't buffer the packet to transmit later, so you simply drop it. This is rate limiting.
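The no-buffer case is even simpler to sketch -- again a toy model of my own, same slot-based units as above: anything over the line rate in a given slot is gone, no matter how idle the following slots are:

```python
def police(arrivals, rate):
    """Toy rate limiter with no queue: packets above `rate` per slot
    are dropped on arrival rather than delayed.

    Returns (packets_sent_per_slot, packets_dropped).
    """
    sent, dropped = [], 0
    for pkts in arrivals:
        n = min(pkts, rate)              # forward only up to the line rate
        sent.append(n)
        dropped += pkts - n              # the rest is lost immediately
    return sent, dropped
```

The same 10-packet burst into a 2-packets-per-slot link now delivers 2 packets and drops 8 -- the idle slots afterwards can't help, because there's nowhere to hold the excess.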

That's my theory, but what's the effect?

I have seen the difference in effect on a real network running IP over ATM. The ATM core at this large European service provider was running equipment from "Vendor N". N's ATM access switches have very small cell buffers -- practically none, in fact.

When we connected routers from "vendor C" that didn't have much buffering on their ATM interfaces to this core, users saw very poor e-mail and HTTP throughput. We discovered this was happening because, during bursts of traffic, there were long trains of sequential packet loss -- including many TCP ACKs. This caused the TCP senders to rapidly back off their transmit windows, and that, together with the packet loss itself, was the major cause of poor throughput. Although we didn't figure this out until much later, a side effect of the sequential packet loss (i.e. no drop policy) was to synchronize all of the TCP senders -- the "burstiness" of the traffic got worse because all of the senders were now trying to increase their send windows at the same time.
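The synchronization effect can be illustrated with a deliberately crude AIMD model (this is not real TCP -- no RTTs, no slow start, and the assumption that a tail-drop loss train hits every flow in the same step is exactly the pathology being described):

```python
def aimd_sync(n_flows, capacity, steps):
    """Toy AIMD model of tail-drop synchronization: each flow grows its
    window by 1 per step; when the aggregate exceeds link capacity, the
    sequential loss train hits *all* flows, so every window halves in
    the same step.  Returns the per-step aggregate offered load."""
    wins = [1] * n_flows
    load = []
    for _ in range(steps):
        wins = [w + 1 for w in wins]               # additive increase, in lockstep
        if sum(wins) > capacity:                   # burst overflows the tiny buffer
            wins = [max(1, w // 2) for w in wins]  # every sender backs off together
        load.append(sum(wins))
    return load
```

The aggregate load comes out as a single large sawtooth -- after every overflow, all flows halve together and then ramp back up in step, so the next burst is just as bad. A per-flow (randomized) drop policy like RED breaks this lockstep.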

To fix the problem, we replaced the ATM interface cards on the routers -- it turns out Vendor C has an ATM interface with lots of buffering, a configurable drop policy (we used WRED) and a cell-level traffic shaper, presumably to address this very issue. The users saw much improved e-mail and web performance and everyone was happy, except for the owner of the routers, who wanted to know why they had to buy the more expensive ATM card (i.e. why the ATM core people couldn't put more buffering on their ATM access ports).
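For readers unfamiliar with WRED: it is a weighted variant of RED (Floyd & Jacobson), which drops packets probabilistically *before* the queue fills, desynchronizing the senders. A minimal sketch of the classic RED drop decision -- parameter names are mine, and real implementations track an exponentially weighted average queue length:

```python
import random

def red_drop(avg_qlen, min_th, max_th, max_p):
    """Classic RED drop decision: below min_th never drop, above max_th
    always drop, in between drop with probability rising linearly to
    max_p.  WRED runs one such curve per traffic class/precedence.
    Returns True if the arriving packet should be dropped."""
    if avg_qlen < min_th:
        return False                     # queue short: admit everything
    if avg_qlen >= max_th:
        return True                      # queue long: drop everything
    # linear ramp between the thresholds
    p = max_p * (avg_qlen - min_th) / (max_th - min_th)
    return random.random() < p
```

Because drops are spread randomly across flows as the queue grows, only a few senders back off at a time instead of all of them at once.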

Hope this helps,

Mathew




> Everyone seems to answer me with 'bandwidth x delay product' and similar,
> but think about IP routeing. The intermediate points are not doing any
> form of per-packet ack etc. and so do not need to have large windows of
> data etc.
>
> I can understand the need in end-points and networks (like X.25) that do
> per-hop clever things...
>
> Will someone please point me to references that actually demonstrate why
> an IP router needs big buffers (as opposed to lots of 'downstream'
> ports)?
>
> Peter
| Mathew Lodge | [email protected] |
| Director, Product Management | Ph: +1 408 789 4068 |
| CPLANE, Inc. | http://www.cplane.com |