North American Network Operators Group


Re: 923Mbits/s across the ocean

  • From: Cottrell, Les
  • Date: Sat Mar 08 13:07:13 2003

I am not normally on this list, but someone kindly gave me copies of some of the email concerning the Internet2 Land Speed Record, so I have joined the list.

As one of the PIs of the record, I thought it might be useful to comment on a few interesting items I have seen, and no, I am not trying to flame anybody:

"Give  em a million dollars, plus fiber from here to anywhere and let me muck with the TCP algorith, and I can move a GigE worth of traffic too - Dave"

You are modest in your budgetary request. Just the Cisco router (GSR 12406) we had on free loan listed at close to a million dollars, and the OC192 links just from Sunnyvale to Chicago would have cost what was left of the million, per month.

We used a stock TCP stack (the standard Linux kernel TCP). We did, however, use jumbo frames (9000-byte MTUs).
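
For anyone wanting to try this themselves, a minimal sketch of the sort of Linux tuning involved (the interface name and buffer sizes below are illustrative assumptions, not our exact settings):

  # enable jumbo frames on the GigE interface (interface name assumed)
  ifconfig eth0 mtu 9000
  # raise the socket buffer ceilings so large TCP windows are possible
  sysctl -w net.core.rmem_max=134217728
  sysctl -w net.core.wmem_max=134217728
  sysctl -w net.ipv4.tcp_rmem="4096 87380 134217728"
  sysctl -w net.ipv4.tcp_wmem="4096 65536 134217728"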

In response to Richard A Steenbergen: we are not "now living in a tropical foreign country, with lots and lots of drugs and women", but then the weather in California is great today.

"What am I missing here, theres OC48=2.4Gb, OC192=10Gb ..."

We were running host to host (end-to-end) with a single stream on common off-the-shelf equipment. There are not too many (I think none) >1GE host NICs available today that are in production (i.e. available without signing a non-disclosure agreement).
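
To make "single stream, host to host" concrete, here is a sketch of how one might measure it with a stock tool such as iperf (the host name and window size are illustrative, not our exact setup):

  iperf -s -w 40M                                    # on the receiving host
  iperf -c receiver.example.org -w 40M -t 60 -i 5    # on the sending host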

"Production commercial networks ... Blow away these speeds on a regular basis". 
See the above remark about end-to-end application to application, single stream.

"So, you turn down/off all the parts of TCP that allow you to share bandwidth ..." 
We did not mess with the TCP stack; it was stock, off the shelf.

"... Mention that "Internet speed records" are measured in terabit-meters/sec." 
You are correct, this is important, but reporters want a sound bite and typically only focus on one thing at a time. I will make sure next time I talk to a reporter to emphasize this. Maybe we can get some mileage out of Petabmps (petabit-metres per second), it sounds catchy.
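
For what it is worth, here is that metric worked through for this record (the path length is my assumption, roughly 10,000 km for the route we used; I do not have the official figure to hand):

  # The record expressed in petabit-metres/s: throughput times path length.
  rate_bps = 923e6            # the 923 Mbit/s single-stream rate
  distance_m = 1.0e7          # assumed ~10,000 km path
  print(f"{rate_bps * distance_m / 1e15:.1f} Pbmps")   # ~9.2 Pbmps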

"What kind of production environment needs a single TCP stream of data at 1Gbits/s over a 150ms latency link?" 
Today High Energy Particle Physics needs hundreds of Mbits/s between California and Europe (Lyon, Padova and Oxford) to deliver data on a timely basis from the experiment site at SLAC to regional computer sites in Europe. Today on production academic networks (with sustainable rates of 100 to a few hundred Mbits/s) it takes about a day to transmit just over a TByte of data, which just about keeps up with the data generation rates. The data generation rates are doubling per year, so within 1-3 years we will need speeds like those in the record on a production basis. We needed to ensure we can achieve the needed rates, and to learn whether we can do it with off-the-shelf hardware, how the hosts and OSes need configuring, how to tune the TCP stack and how newer stacks perform, what the requirements are for jumbo frames, etc. Besides High Energy Physics, other sciences are beginning to grapple with how to replicate large databases across the globe; such sciences include radio-astronomy, the human genome, global weather, seismic ...
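
As a sanity check on those numbers, a back-of-the-envelope calculation using the rates quoted above:

  # Time to ship ~1 TByte at production academic network rates.
  TBYTE_BITS = 8e12                      # one TByte expressed in bits
  for rate_mbps in (100, 200, 300):
      hours = TBYTE_BITS / (rate_mbps * 1e6) / 3600
      print(f"{rate_mbps} Mbit/s -> {hours:.1f} hours")
  # 100 Mbit/s -> 22.2 hours, i.e. the "about a day" quoted above.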

The spud gun is interesting; given the distances, probably a 747 freighter packed with DST tapes or disks is a better idea. Assuming we fill the 747 with, say, 50 GByte tapes (disks would probably be better), then if it takes 10 hours to fly from San Francisco (BTW Sunnyvale is near San Francisco, not near LA as one person talking about retiring to better weather might lead one to believe) the bandwidth is about 2-4 Tbits/s. However, this ignores the reality of labelling, writing the tapes, removing them from the silo robot, packing, getting to the airport, loading, unloading, getting through customs, etc. In reality the latency is closer to 2 weeks. Even worse, if there is an error (heads not aligned, etc.) then the retry latency is long and the effort involved considerable. Also, the network solution lends itself much better to automation; in our case it saved a couple of full-time-equivalent people at the sending site who distribute the data on a regular basis to our collaborator sites in France, the UK and Italy.
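
For the curious, a rough sketch of that sneakernet arithmetic (the tape counts are my assumption, roughly 40-80 tonnes of ~200 g cartridges; the flight time and tape capacity are the figures above):

  # Rough sneakernet bandwidth for a 747 freighter full of 50 GByte tapes.
  TAPE_BITS = 50e9 * 8                   # one tape's capacity in bits
  FLIGHT_SECONDS = 10 * 3600             # ~10 hour flight
  for n_tapes in (200_000, 400_000):
      tbps = n_tapes * TAPE_BITS / FLIGHT_SECONDS / 1e12
      print(f"{n_tapes} tapes -> {tbps:.1f} Tbit/s")
  # 200,000 tapes -> 2.2 Tbit/s; 400,000 -> 4.4 Tbit/s, in line with the
  # 2-4 Tbit/s quoted above.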

The remarks about window size and buffers are interesting also. It is true large windows are needed: to approach 1 Gbit/s we required 40 MByte windows, and in practice approaching 2.5 Gbits/s requires 120 MByte windows. If this is going to be a problem, then we need to raise questions like this soon and figure out how to address them (add more memory, use other protocols, etc.).
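
Those window sizes are roughly what the bandwidth-delay product predicts; a sketch, where the RTT is my assumption (~170 ms for a California-Europe path) and the doubling reflects the usual Linux practice of budgeting about twice the BDP for socket buffers:

  # Bandwidth-delay product: the window needed to keep a long fat pipe full.
  RTT = 0.17                             # seconds, assumed round-trip time
  for gbps in (1.0, 2.5):
      bdp_mb = gbps * 1e9 * RTT / 8 / 1e6
      print(f"{gbps} Gbit/s: BDP ~{bdp_mb:.0f} MByte, "
            f"buffer ~{2 * bdp_mb:.0f} MByte")
  # 1 Gbit/s -> ~21/43 MByte; 2.5 Gbit/s -> ~53/106 MByte; the same
  # ballpark as the 40 and 120 MByte figures above.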

I am quite happy to concede that this does not need to be about some jocks beating a record. I do think it is important to catch the public's attention as to why high speeds are important and that they are achievable today, application to application (it would also be useful to estimate when such speeds will be available to universities, large companies, small companies, the home, etc.). For techies it is important to start to understand the challenges the high speeds raise, e.g. CPU and router memory, bugs in TCP, the OS and applications, new TCP stacks, new (possibly UDP based) protocols such as tsunami, the need for 64-bit counters in monitoring, effects of the NIC card, jumbo frame requirements, etc., and what is needed to address them. It is also important to put it in meaningful terms (such as 2 full-length DVD movies in a minute, though that could also increase the "cease and desist" legal messages shipped ;-)).
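
That DVD sound bite checks out roughly, assuming ~4 GByte per movie (my assumption):

  # One minute at the record rate, expressed in "DVD movies".
  gbytes = 923e6 * 60 / 8 / 1e9
  print(f"{gbytes:.1f} GByte/minute ~= {gbytes / 4:.1f} movies")
  # ~6.9 GByte in a minute, i.e. close to two full-length movies.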

Hope that helps, and thanks to you guys in NANOG for providing today's high-speed networks.