North American Network Operators Group


Re: "Does TCP Need an Overhaul?" (internetevolution, via slashdot)

  • From: Iljitsch van Beijnum
  • Date: Mon Apr 07 12:55:50 2008


On 7 Apr 2008, at 16:20, Kevin Day wrote:


As a quick example: two FreeBSD 7.0 boxes attached directly over GigE, with New Reno, fast retransmit/recovery, and 256K window sizes, and an intermediary router simulating packet loss. A single HTTP TCP session going from a server to a client.

Ok, assuming a 1460-byte MSS, that leaves the RTT as the unknown.
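For reference, the relation being applied here is presumably the Mathis et al. approximation, throughput ~ MSS * C / (RTT * sqrt(p)). A rough Python sketch, taking C ~= 1.22 as an assumed constant and rearranging so the RTT is the unknown:

    import math

    C = 1.22  # constant in the Mathis et al. approximation (assumed value)

    def reno_throughput_bps(mss_bytes, rtt_s, loss_prob):
        """Rough steady-state TCP Reno throughput estimate, in bits per second."""
        return mss_bytes * 8 * C / (rtt_s * math.sqrt(loss_prob))

    def implied_rtt_s(mss_bytes, throughput_bps, loss_prob):
        """The same relation rearranged: the RTT implied by throughput and loss."""
        return mss_bytes * 8 * C / (throughput_bps * math.sqrt(loss_prob))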


SACK enabled, 0% packet loss: 780Mbps
SACK disabled, 0% packet loss: 780Mbps

Is that all? Try with jumboframes.


SACK enabled, 0.005% packet loss: 734Mbps
SACK disabled, 0.005% packet loss: 144Mbps (19.6% of the speed with SACK enabled)

144 Mbps at a packet loss probability of 0.00005 would imply an RTT of ~110 ms, so obviously something isn't right with that case.


734 Mbps would correspond to an RTT of around 2 ms, which sounds fairly reasonable.
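Rough numbers, using the same approximation and assumed constant as above (a sketch, not the exact arithmetic from the original mail):

    import math

    mss_bits, p, C = 1460 * 8, 0.00005, 1.22   # assumed MSS and constant

    # Implied RTT for the 734 Mbps case: roughly 2-3 ms, plausible for
    # directly attached GigE boxes.
    print(mss_bits * C / (734e6 * math.sqrt(p)))   # ~0.0027 s

    # At a ~2 ms RTT the same model predicts on the order of a gigabit
    # per second, far above the observed 144 Mbps, so simple random
    # single-segment loss doesn't explain that case.
    print(mss_bits * C / (0.002 * math.sqrt(p)))   # ~1.0e9 bit/s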

I'd be interested to see what's really going on here; I suspect the packet loss isn't sufficiently random, so multiple segments are lost from a single window. Or maybe disabling SACK also disables fast retransmit? I'll be happy to look at a tcpdump of the 144 Mbps case.
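As a quick sanity check on the multiple-losses-per-window theory, a small Python sketch, assuming independent per-segment drops and a window of roughly 179 segments (256 KB of 1460-byte segments); both figures are assumptions, not taken from the test:

    import math

    p = 0.00005   # 0.005% per-segment drop probability
    w = 179       # ~256 KB window of 1460-byte segments (assumed)

    def prob_k_drops(k):
        """Probability that exactly k segments in one window are dropped."""
        return math.comb(w, k) * p**k * (1 - p)**(w - k)

    p_any   = 1 - prob_k_drops(0)                     # at least one drop
    p_multi = 1 - prob_k_drops(0) - prob_k_drops(1)   # two or more drops

    print(f"P(any loss in a window):   {p_any:.5f}")    # ~0.009
    print(f"P(>=2 losses in a window): {p_multi:.2e}")  # ~4e-5
    print(f"multi-loss share of lossy windows: {p_multi / p_any:.2%}")  # ~0.44%

If the injected loss really were independent per segment, well under 1% of lossy windows would drop more than one segment, and a single drop per window is the case plain fast retransmit recovers from without SACK; so either the loss injection is bursty or fast retransmit isn't engaging.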

It would be very nice if more network-friendly protocols were in use, but with "download optimizers" for Windows that crank the TCP window sizes way up, the general move to solving latency by opening more sockets, and P2P doing whatever it can to evade ISP detection, it's probably a bit late.

Don't forget that the user is only partially in control; the data also has to come from somewhere. Service operators have little incentive to break the network. And users would probably actually like it if their p2p were less aggressive: that way you can keep it running while you do other things without jumping through traffic-limiting hoops.