North American Network Operators Group


Re: Question about propagation and queuing delays

  • From: Richard A Steenbergen
  • Date: Mon Aug 22 14:18:56 2005

On Mon, Aug 22, 2005 at 11:41:31AM -0400, Patrick W. Gilmore wrote:
> 
> I think the key here is "when you are suffering congestion".
> 
> RS said that queueing delay is irrelevant when the link was between  
> 60% and 97% full, depending on the speed of the link.  If you have  
> a link which is more full than that, queueing techniques matter.
> 
> Put another way, queueing techniques are irrelevant when the queue  
> size is almost always <= 1.

Well, the reality is that there is no such thing as a "50% used" circuit. 
A circuit is either 0% used (not transmitting) or 100% used (transmitting) 
at any given moment; what we are really measuring is the percentage of 
time the circuit was being utilized over a given period (as in 
"bits per second").
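The on/off point can be sketched with a toy simulation (the numbers are 
illustrative assumptions: 1500-byte packets at 100 Mbps, roughly Poisson 
arrivals), where "utilization" falls out as the fraction of time the link 
spends transmitting:

```python
import random

random.seed(1)

# At any instant the link is either idle or transmitting; "utilization"
# is just the fraction of the observation window spent transmitting.
SERVICE_US = 120          # 1500 bytes * 8 bits / 100 Mbps = 120 microseconds
total_us = 1_000_000      # observe one simulated second
busy_us = 0
t = 0.0
while t < total_us:
    t += random.expovariate(1 / 120)  # mean 120 us idle gap between packets
    busy_us += SERVICE_US             # link is "100% used" for this packet
    t += SERVICE_US

print(f"link was transmitting {busy_us / total_us:.0%} of the second")
```

With a mean idle gap equal to the service time, the link comes out roughly 
"50% utilized" even though it was never anything but fully busy or fully idle.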

If you want to send a packet, and the circuit is being utilized, you get 
shoved into a queue. If the circuit is so slow that serialization delays 
are massive (say, 500ms until your packet gets transmitted), you're going 
to notice it. If the serialization delay is measured in microseconds, 
you're probably not going to care, as the packet is going to be on its 
merry way "soon enough".
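Serialization delay is just packet size divided by line rate, so the range 
from "massive" to "who cares" is easy to put numbers on (link speeds below 
are illustrative picks, not from the thread):

```python
# Serialization delay: how long the wire is occupied clocking one packet out.
def serialization_delay_ms(packet_bytes: int, link_bps: int) -> float:
    return packet_bytes * 8 / link_bps * 1000  # convert seconds to ms

# A 1500-byte packet on a few example link speeds:
for name, bps in [("56k modem", 56_000),
                  ("T1", 1_544_000),
                  ("GigE", 1_000_000_000)]:
    print(f"{name:>10}: {serialization_delay_ms(1500, bps):.3f} ms")
```

On the modem you wait over 200 ms per queued packet; on GigE it is about 
12 microseconds, which is why the same queue depth feels totally different 
at different speeds.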

Now say you've got a packet coming in and waiting to be transmitted out an 
interface. Below "50% utilized", the odds are pretty low that you're going 
to hit much of anything in the queue at all. Between around "60% utilized" 
and "97% utilized" the chances of hitting something in the queue start to 
creep up, but on a reasonably fast circuit this is still barely 
noticeable (less than a millisecond of jitter), not enough to get noticed 
by any actual application. As you start to get above that magic "97% 
utilized" number (or whatever it actually is, I forget offhand), the odds 
of hitting something in the queue before you can transmit start becoming 
really, really good. At that point, the queue starts growing very quickly, 
and the latency induced by the queueing delays starts skyrocketing. This 
continues until either a) you exhaust your queue and drop packets (called 
"tail drop", where you blindly drop whatever there isn't room for in the 
queue), or b) you otherwise force the flow control mechanisms of the 
higher-level protocols (like TCP) to slow down.
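The "skyrocketing" shape falls straight out of basic queueing theory. In 
the textbook M/M/1 model (an idealization: Poisson arrivals, exponential 
service times, real traffic is burstier), the mean time spent waiting in 
the queue is rho/(1-rho) times the service time, so it blows up as 
utilization approaches 100%:

```python
# Mean M/M/1 queueing delay: W_q = rho / (1 - rho) * service_time,
# where rho is utilization and service_time is the serialization delay.
def mm1_wait_ms(utilization: float, service_time_ms: float) -> float:
    return utilization / (1.0 - utilization) * service_time_ms

SVC_MS = 0.12  # 1500-byte packet at 100 Mbps (illustrative)
for rho in (0.50, 0.90, 0.97, 0.99):
    print(f"{rho:.0%} utilized: {mm1_wait_ms(rho, SVC_MS):.2f} ms avg queue wait")
```

At 50% the average wait equals one packet's serialization time; at 97% it 
is roughly 32 packets' worth, and each additional percent of load makes it 
dramatically worse.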

Plenty of folks who enjoy math have done lots of research on the subject, 
and there are lots of pretty graphs out there. Perhaps someone will come 
up with a link to one. :)

-- 
Richard A Steenbergen <[email protected]>       http://www.e-gerbil.net/ras
GPG Key ID: 0xF8B12CBC (7535 7F59 8204 ED1F CC1C 53AF 4C41 5ECA F8B1 2CBC)