North American Network Operators Group


Re: Notes from the October NANOG meeting

  • From: Curtis Villamizar
  • Date: Wed Oct 26 19:57:32 1994

In message <[email protected][192.136.150.3]>, Stan Barber writes:
> Here are my notes from the recent NANOG meeting. Please note that any
> mistakes are mine. Corrections, missing information, or further exposition of
> any of the information here will be gratefully accepted and added to this
> document which will be available via anonymous ftp later this month.

Stan,

Thanks for taking notes.  It is a difficult job.  There are some major
omissions.

Here are just a few that I think are important in that they deal with
what might be a critical issue.  I left the Cc on, in case someone
remembers differently.

Curtis



Ameritech presented results of TCP tests run across their NAP.
(Please get details from Ameritech.)  Ameritech claims that their
switch performed well.  It was pointed out that with no delay in the
tests, the delay bandwidth product of the TCP flows was near zero, and
it was asserted (by me, actually) that results from such testing are
not useful, since real TCP flows going through a NAP see considerable
delay.

> PacBell NAP Status--Frank Liu
> 
> [ ... ]
> 
> Testing done by Bellcore and PB.
> [TTCP was used for testing. The data was put up and removed quickly, so I
> did lose some in taking notes.]
> [ ... ]
> 
> Conclusions
> 
> Maximum throughput is 33.6 Mbps for the 1:1 connection.
> 
> Maximum throughput will be higher when the DSU HSSI clock and data-rate
> mismatch is corrected.
> 
> Cell loss rate is low (.02% -- .69%).
> 
> Throughput degraded when the TCP window size is greater than 13000 bytes.
> 
> Large switch buffers and router traffic shaping are needed.
> 
> [The results appear to show TCP backing-off strategy engaging.]

Again, no delay was added.  Measured delay (ping time) was said to be
3 msec (presumably due to switching or slow host response).  Again, it
was pointed out that with no delay in the tests, the delay bandwidth
product of the TCP flows was near zero, and it was asserted that
results from such testing are not useful.
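
(A rough back-of-the-envelope sketch, in C, of the difference this
makes.  The 3 msec figure is the ping time just mentioned; 70 msec is
a typical US cross-continent delay, discussed further below; the DS3
is taken as roughly 45 Mb/s, and all of the numbers are only
illustrative.)

    /* Delay bandwidth product of a DS3 path at the two delays discussed
     * above.  The 45 Mb/s rate and the delay values are approximate. */
    #include <stdio.h>

    int main(void)
    {
        double ds3_bps = 45.0e6;                /* DS3 line rate, roughly     */
        double delay_sec[] = { 0.003, 0.070 };  /* NAP ping vs. cross-country */
        int i;

        for (i = 0; i < 2; i++) {
            double bdp_bytes = ds3_bps * delay_sec[i] / 8.0;
            printf("delay %2.0f msec -> delay bandwidth product ~ %.0f bytes\n",
                   delay_sec[i] * 1000.0, bdp_bytes);
        }
        return 0;   /* ~17 Kbytes at 3 msec, ~394 Kbytes at 70 msec */
    }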

> ANS on performance --- Curtis Villamizar
> There are two problems: aggregation of lower-speed TCP flows, support for
> high-speed elastic supercomputer applications.
> 
> RFC 1191 is very important, as is RFC 1323, for these problems to be addressed.
> 
> The work that was done -- previous work showed that top speed for TCP was
> 30 Mb/s.
> 
> The new work -- TCP Single Flow, TCP Multiple Flow
> 
> Environment -- two different DS3 paths  (NY->MICH: 20msec; NY->TEXAS->MICH:
> 68msec), four different versions of the RS6000 router software and Indy/SCs
> 
> Conditions -- Two background conditions (no background traffic, reverse TCP
> flow intended to achieve 70-80% utilization)
> Differing numbers of TCP flows.
> 
> Results are available on-line via http.  Temporarily it is located at:
> 
> http://tweedledee.ans.net:8001:/
> 
> It will be on-line at rrdb.merit.edu more permanently.

The difficulty in carrying TCP traffic is proportional to the delay
bandwidth product of the traffic, not just the bandwidth.  Adding
delay increases the potential for bursts sustained over a longer
period.  Real networks have delay.  US cross-continent delay is 70
msec.
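
(Another rough sketch under the same assumptions: at 45 Mb/s and 70
msec, the delay bandwidth product is well beyond the 64 Kbyte limit of
an unscaled TCP window, which is why RFC 1323 window scaling was
mentioned in the notes above.  The shift-count arithmetic follows
RFC 1323; the exact figures are approximate.)

    /* How large a TCP window a single flow needs to fill a DS3 over a
     * 70 msec path, and the RFC 1323 window scale factor that makes
     * such a window expressible. */
    #include <stdio.h>

    int main(void)
    {
        double bdp_bytes = 45.0e6 * 0.070 / 8.0;  /* ~394 Kbytes             */
        unsigned long window = 65535UL;           /* largest unscaled window */
        int scale = 0;

        while ((double)window < bdp_bytes) {      /* RFC 1323 shift count */
            window <<= 1;
            scale++;
        }
        printf("delay bandwidth product ~ %.0f bytes\n", bdp_bytes);
        printf("needs RFC 1323 window scale %d (window up to %lu bytes)\n",
               scale, window);
        return 0;
    }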

ANSNET results were given using improved software which increased
buffer capacity, intentionally crippled software (artificially limited
buffering), and software which included Random Early Detection (RED,
described in an IEEE/ACM TON paper by Floyd and Jacobson).  Sustained
goodput rates of up to 40-41 Mb/s were achieved using ttcp and 1-8 TCP
flows.  Some pathological cases were demonstrated in which much worse
performance was achieved.  These cases mostly involved too little
buffering at the congestion point (intentionally crippled router code
was used to demonstrate this) or a single TCP flow with the TCP window
set much too large (3-5 times the delay bandwidth product).  The
latter pathological case can be avoided if the routers implement RED.
The conclusions were: 1) routers need buffer capacity as large as the
delay bandwidth product, and 2) routers should implement RED.
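
(For anyone not familiar with RED, here is a much-simplified sketch of
the idea from the Floyd/Jacobson paper: keep a smoothed average of the
queue length and begin dropping packets probabilistically before the
queue actually fills, so TCP senders back off early and a few at a
time rather than all at once.  The parameter values below are purely
illustrative, not taken from the paper or from any router
implementation, and refinements such as the inter-drop packet count
and idle-time handling are omitted.)

    /* Much-simplified sketch of RED (Random Early Detection). */
    #include <stdio.h>
    #include <stdlib.h>

    #define WQ      0.002   /* weight for the average-queue filter */
    #define MIN_TH  50.0    /* packets: start random drops here    */
    #define MAX_TH  150.0   /* packets: drop everything above this */
    #define MAX_P   0.02    /* drop probability at MAX_TH          */

    static double avg_q = 0.0;   /* smoothed queue length */

    /* Return 1 if the arriving packet should be dropped, given the
     * instantaneous queue length in packets. */
    int red_drop(int queue_len)
    {
        double p;

        avg_q = (1.0 - WQ) * avg_q + WQ * (double)queue_len;

        if (avg_q < MIN_TH)
            return 0;                    /* no congestion: keep it */
        if (avg_q >= MAX_TH)
            return 1;                    /* badly congested: drop  */

        /* drop probability rises linearly between the thresholds */
        p = MAX_P * (avg_q - MIN_TH) / (MAX_TH - MIN_TH);
        return ((double)rand() / RAND_MAX) < p;
    }

    int main(void)
    {
        int i, drops = 0;

        /* Crude demonstration: a queue held near 180 packets by offered
         * load exceeding the output rate; RED starts shedding packets
         * well before any hard queue limit would be reached. */
        for (i = 0; i < 2000; i++)
            drops += red_drop(180);
        printf("dropped %d of 2000 packets\n", drops);
        return 0;
    }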

Only a 20 msec delay was added to the prototype NAP testing.  Results
with the prototype NAP and 20 msec delay were very poor compared to
the performance of unchannelized DS3.  Prototype NAP testing results
were poor compared to the Ameritech and PacBell results due to the
more realistic delay bandwidth product.  Worse results can be expected
with a 70 msec delay, and those may be a better indication of actual
performance when forwarding real traffic.  More testing is needed
after fixes to the ADSUs are applied.  A sufficient bottleneck cannot
be created at the switch until the ADSU problems are addressed.

There was some discussion (at various times during the presentations)
of what this all means for the NAPs.  If I may summarize: on the
positive side, the Ameritech switch has more buffering than the Fore
switch used in the Bellcore prototype NAP.  On the negative side,
Ameritech didn't include any delay in their testing.  NAP testing
results (positive results from Ameritech, mixed results from PacBell,
and negative results from ANS) are inconclusive so far.

Curtis

BTW - I can't see how anyone could have kept accurate notes since a
lot was presented.  Maybe it would be better to just collect the
slides from the presenters.  If you want an ASCII version of mine, try
http://tweedledee.ans.net:8001/nap-testing/tcp-performance.html and
save-as plain text.  You'll get the slides minus the plots of results
but with some additional annotation.