North American Network Operators Group


Tail Drops and TCP Slow Start

  • From: Murphy, Brennan
  • Date: Fri Dec 07 12:14:19 2001

If I have a DS3 or OC3 handling mounds and mounds of FTP download traffic,
what is the easiest way to detect whether the bandwidth in use is falling
into a classic tail drop pattern?  According to a Cisco book I am reading,
the bandwidth utilization should graph in a "sawtooth" pattern: gradual
increases as multiple machines ramp up via TCP slow start, followed by
sharp drops.  Will this only happen when utilization approaches 100%?
(Maybe a dumb question.)
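
For what it's worth, here is the mental model I have, as a toy Python
sketch (purely illustrative -- the link rate, buffer depth, flow count,
and thresholds below are made-up numbers, not a real model of a DS3).
With tail drop, the shared queue overflows and many flows lose packets in
the same RTT, so they all back off together and the aggregate ramps up
again from a lower level:

# Toy model of synchronized tail-drop losses across many TCP flows.
LINK = 1000      # packets the link can carry per RTT (arbitrary)
BUFFER = 200     # router queue depth in packets (arbitrary)
FLOWS = 20

cwnd = [1.0] * FLOWS       # per-flow congestion window, in packets
ssthresh = [64.0] * FLOWS  # per-flow slow-start threshold (arbitrary)

for rtt in range(80):
    offered = sum(cwnd)
    util = 100.0 * min(offered, LINK) / LINK
    print("RTT %2d  offered %6.0f pkts  utilization %5.1f%%"
          % (rtt, offered, util))

    if offered > LINK + BUFFER:
        # Queue overflow: with tail drop, most flows see a loss in the
        # same RTT and halve their windows at once -- the sharp edge of
        # the sawtooth.
        for i in range(FLOWS):
            ssthresh[i] = cwnd[i] / 2
            cwnd[i] = ssthresh[i]
    else:
        # No loss: exponential growth in slow start, linear growth in
        # congestion avoidance.
        for i in range(FLOWS):
            cwnd[i] = cwnd[i] * 2 if cwnd[i] < ssthresh[i] else cwnd[i] + 1

In this toy model the offered load sawtooths the whole time, but the
utilization graph only shows the pattern clearly once the offered load
reaches the link rate, which is what made me wonder about the 100%
question above.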

Should I be able to do a "show buffers" and see misses, or is there a
better way to detect this other than graphing?
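
In case it helps frame the question, this is the sort of thing I had in
mind for checking drop counters without graphing: diff the "Total output
drops" counters between two saved captures of "show interfaces".  A rough
Python sketch (the regex assumes the counter appears as "Total output
drops: N", which may differ by IOS version, and the file names are just
placeholders):

import re
import sys

def drops_by_interface(path):
    """Map interface name -> total output drops from a saved capture."""
    counts = {}
    current = None
    for line in open(path):
        # Interface header lines start in column 0, e.g. "Serial3/0 is up, ..."
        m = re.match(r"^(\S+) is ", line)
        if m:
            current = m.group(1)
        m = re.search(r"Total output drops:\s*(\d+)", line)
        if m and current:
            counts[current] = int(m.group(1))
    return counts

# Usage: python drop_delta.py show_int_before.txt show_int_after.txt
before = drops_by_interface(sys.argv[1])
after = drops_by_interface(sys.argv[2])

for ifname in sorted(after):
    delta = after[ifname] - before.get(ifname, 0)
    if delta > 0:
        print("%-12s +%d output drops between captures" % (ifname, delta))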

Also, suppose that in examining my FTP traffic patterns I notice that
traffic spikes consistently at 15 minutes after the top of the hour.
Could I create a timed access list that only kicks in at that time?
Does anyone have experience with WRED for handling FTP congestion?
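
For reference, my understanding of what WRED/RED is doing -- this is a
generic sketch of the RED algorithm in Python, not Cisco's exact
implementation, and all of the thresholds and weights below are made-up
numbers -- is that it starts dropping a growing fraction of packets once
the average queue depth crosses a minimum threshold, so flows back off at
different times instead of all at once when the queue finally fills:

import random

MIN_TH = 20     # packets: start probabilistic dropping here (arbitrary)
MAX_TH = 40     # packets: drop every arrival above this (arbitrary)
MAX_P  = 0.10   # drop probability as avg depth reaches MAX_TH (arbitrary)
WEIGHT = 0.002  # EWMA weight for averaging the queue depth (arbitrary)

avg_q = 0.0

def should_drop(current_queue_depth):
    """RED-style drop decision for one arriving packet."""
    global avg_q
    # Exponentially weighted moving average smooths out short bursts.
    avg_q = (1 - WEIGHT) * avg_q + WEIGHT * current_queue_depth

    if avg_q < MIN_TH:
        return False        # queue is comfortable: never drop
    if avg_q >= MAX_TH:
        return True         # queue is deep: drop everything (tail-drop-like)
    # In between: drop probability rises linearly toward MAX_P.
    p = MAX_P * (avg_q - MIN_TH) / (MAX_TH - MIN_TH)
    return random.random() < p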

I usually take these types of questions to Cisco, but I thought I'd post
them to this list to get some generic real-world advice.

sh buff
Buffer elements:
     499 in free list (500 max allowed)
     5713661 hits, 0 misses, 0 created

Public buffer pools:
Small buffers, 104 bytes (total 600, permanent 600):
     580 in free list (20 min, 1250 max allowed)
     2225528470 hits, 6 misses, 18 trims, 18 created
     0 failures (0 no memory)
Middle buffers, 600 bytes (total 450, permanent 450):
     448 in free list (10 min, 1000 max allowed)
     68259213 hits, 7 misses, 21 trims, 21 created
     0 failures (0 no memory)
Big buffers, 1524 bytes (total 450, permanent 450):
     449 in free list (5 min, 1500 max allowed)
     6807747 hits, 0 misses, 0 trims, 0 created
     0 failures (0 no memory)
VeryBig buffers, 4520 bytes (total 50, permanent 50):
     50 in free list (0 min, 1500 max allowed)
     46167681 hits, 0 misses, 0 trims, 0 created
     0 failures (0 no memory)
Large buffers, 5024 bytes (total 50, permanent 50):
     50 in free list (0 min, 150 max allowed)
     0 hits, 0 misses, 0 trims, 0 created
     0 failures (0 no memory)
Huge buffers, 18024 bytes (total 5, permanent 5):
     5 in free list (0 min, 65 max allowed)
     34 hits, 6 misses, 12 trims, 12 created
     0 failures (0 no memory)

Interface buffer pools:
IPC buffers, 4096 bytes (total 768, permanent 768):
     768 in free list (256 min, 2560 max allowed)
     769236774 hits, 0 fallbacks, 0 trims, 0 created
     0 failures (0 no memory)

Header pools: