North American Network Operators Group


Re: UUNET Routing issues

  • From: Iljitsch van Beijnum
  • Date: Fri Oct 04 17:53:34 2002

On Fri, 4 Oct 2002, Petri Helenius wrote:

> >Kind of an arms race between the routers and the hosts to see which can
> >buffer more data.

> You usually end up with 64k window with modern systems anyway. Hardly
> anything uses window scaling bits actively.

I also see ~17k a lot. I guess most applications don't need the extra
performance offered by the larger windows anyway.
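
For what it's worth, without window scaling the receive window itself caps
throughput at window/RTT, and for typical window sizes and round-trip times
that ceiling is already more than most applications ask for. A quick
back-of-the-envelope in Python (the RTTs are just example values):

    # Rough throughput ceiling imposed by the TCP receive window:
    # throughput <= window / RTT (ignoring loss and slow start).
    for window_bytes in (17 * 1024, 64 * 1024):
        for rtt_s in (0.02, 0.1):  # 20 ms and 100 ms round trips (assumed)
            mbps = window_bytes * 8 / rtt_s / 1e6
            print(f"{window_bytes} B window, {rtt_s * 1000:.0f} ms RTT "
                  f"-> {mbps:.1f} Mbit/s max")

On a 100 ms path a 64 KB window already tops out around 5 Mbit/s, which is
plenty for most of what people actually run.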

> Obviously by dropping select packets
> you can keep the window at a more moderate size. Doing this effectively would
> require the box to recognize flows, which is not feasible at high speeds.

I think random early detection (RED) works reasonably well. Obviously
something that really looks at the sessions would work better, but
statistically, RED should work out fairly well.
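
For reference, the core of RED is just a drop probability that ramps up
with the average queue depth, so the heaviest flows statistically see the
first drops and back off before the queue fills. A stripped-down sketch of
that decision (the thresholds and maximum probability are made-up
illustration values, and real RED also smooths the queue depth with an
EWMA and spaces drops out based on how long it's been since the last one):

    import random

    # Minimal RED-style drop decision: never drop below min_th, always
    # drop above max_th, and ramp the drop probability linearly in between.
    def red_drop(avg_queue, min_th=5, max_th=15, max_p=0.1):
        if avg_queue < min_th:
            return False
        if avg_queue >= max_th:
            return True
        p = max_p * (avg_queue - min_th) / (max_th - min_th)
        return random.random() < p

With these numbers, red_drop(10) drops roughly 5% of arriving packets, and
the flows sending the most traffic are the most likely to eat those drops.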

> >Also, well-behaved TCP implementations shouldn't send a full window worth
> >of data back to back. The only way I can see this happening is when the
> >application at the receiving end stalls and then absorbs all the data
> >buffered by the receiving TCP at once.

> I didn't want to imply that the packets would be back to back in the queue,
> but if you have a relatively short path with real latency on the order of a few
> tens of milliseconds and introduce an extra 1000 ms to the path, you have a full
> window of packets in the same queue. They will not be adjacent to each other but
> they would be sitting in the same packet memory.

The only way this would happen is when the sending TCP sends them out back
to back after the window opens up again after having been closed. Under normal
circumstances, the sending TCP sends out two new packets after each ACK.
Obviously ACKs aren't forthcoming if all the traffic is waiting in buffers
somewhere along the way. Only when a packet gets through does an ACK come
back and a new packet (or two) get transmitted.
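
To illustrate (a toy model with made-up numbers for the window, RTT and
bottleneck rate): once ACK clocking takes over, the sender only puts a new
segment on the wire when an old one has cleared the bottleneck, so the
standing queue stays tiny; only an initial full-window burst piles up.

    from collections import deque

    # Toy ACK-clocking model: the bottleneck forwards one segment per tick,
    # there is one ACK per segment, delays are fixed, and there is no loss.
    window = 8                    # segments (arbitrary)
    rtt_ticks = 10                # round-trip time in bottleneck ticks (arbitrary)
    queue = window                # an initial full-window burst hits the queue
    in_flight = deque()           # ticks at which ACKs will come back
    max_queue, late_max = queue, 0
    for tick in range(100):
        if queue:                 # bottleneck forwards one segment per tick
            queue -= 1
            in_flight.append(tick + rtt_ticks)
        while in_flight and in_flight[0] <= tick:
            in_flight.popleft()   # an ACK arrives...
            queue += 1            # ...and clocks out exactly one new segment
        max_queue = max(max_queue, queue)
        if tick >= 50:
            late_max = max(late_max, queue)
    print("worst queue during the initial burst:", max_queue)
    print("worst queue once ACK clocking takes over:", late_max)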

Hm, but a fairly large number of packets being released at once by a
sending TCP could also happen during slow start, while the window is still
growing. This could be half a window at once.
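
To put a number on that: during slow start every returning ACK grows the
window by a full segment, so each ACK triggers two back-to-back segments
and the window doubles every round trip. A minimal sketch (the MSS and the
starting window are assumed values, one ACK per segment, no loss):

    # Slow-start growth: +1 MSS of window per ACK, i.e. doubling per RTT.
    mss = 1460                    # bytes (assumed)
    cwnd = 2 * mss
    for rtt in range(1, 7):
        acks = cwnd // mss        # one ACK per segment sent in the last round
        cwnd += acks * mss        # each ACK also releases two new segments
        print(f"after RTT {rtt}: cwnd = {cwnd // mss} segments")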

> >Under normal circumstances, the full window worth of data will be spread
> >out over the entire path with no more than two packets arriving back to
> >back at routers along the way (unless one session monopolizes a link).

> This discussion started as a discussion of non-normal circumstances. Not sure
> if the consensus is that congestion is non-normal. It's very complicated
> to agree on metrics that define a "normal" network. Most people consider
> some packet loss normal and some jitter normal. Some people even accept
> their DNS being offline for 60 seconds every hour for a "reload" as normal.

Obviously "some" packet loss and jitter are normal. But how much is
normal? Even at a few tenths of a percent packet loss hurts TCP
performance. The only way to keep jitter really low without dropping large
numbers of packets is to severly overengineer the network. That costs
money. So how much are customers prepared to pay to avoid jitter?
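
The usual rule of thumb here is the Mathis et al. approximation, which puts
the ceiling at roughly 1.22 * MSS / (RTT * sqrt(loss)). The MSS and RTT
below are just example values:

    from math import sqrt

    # Approximate TCP throughput ceiling under random loss
    # (Mathis et al.: rate ~= 1.22 * MSS / (RTT * sqrt(p))).
    mss_bytes = 1460              # assumed
    rtt_s = 0.05                  # 50 ms round trip, assumed
    for p in (0.001, 0.003, 0.01):
        bw_mbps = 1.22 * mss_bytes * 8 / (rtt_s * sqrt(p)) / 1e6
        print(f"loss {p * 100:.1f}% -> at most ~{bw_mbps:.1f} Mbit/s")

Even 0.1% loss caps a single session on a 50 ms path at single-digit
Mbit/s, no matter how fat the link is.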

In any case, delays of 1000 ms aren't within any accepted definition of
"normal". With these delays, high-bandwidth batch applications monopolize
the links and interactive traffic suffers. 20 ms worth of
buffer space with RED would keep those high-bandwidth applications in
check and allow a reasonable degree of interactive traffic. Maybe a
different buffer size would be better, but the 20 ms someone mentioned
seems as good a starting point as anything else.
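
For the record, 20 ms of buffering isn't much memory either; it's just the
link rate times 0.02 s (the link rates below are arbitrary examples):

    # How much memory 20 ms of buffering works out to at a few link rates.
    for name, bps in (("T3 (45 Mbit/s)", 45e6),
                      ("OC-3 (155 Mbit/s)", 155e6),
                      ("OC-48 (2.5 Gbit/s)", 2.5e9)):
        kbytes = bps * 0.020 / 8 / 1024
        print(f"{name}: ~{kbytes:,.0f} KB of buffer")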