North American Network Operators Group


Re: Thoughts on increasing MTUs on the internet

  • From: Iljitsch van Beijnum
  • Date: Thu Apr 12 15:24:19 2007


On 12-apr-2007, at 20:15, Randy Bush wrote:


>> A few years ago, the IETF was considering various jumbogram options.
>> As best I recall, that was the official response from the relevant
>> IEEE folks: "no". They're concerned with backward compatibility.

> worse.  they felt that the ether checksum is good at 1500 and not
> so good at 4k etc.  they *really* did not want to do jumbo.  i
> worked that doc.

It looks to me like the checksum issue is highly exaggerated or even completely wrong (as in the 1500 / 4k claim above). From http://www.aarnet.edu.au/engineering/networkdesign/mtu/size.html:


---
The ethernet packet also contains a Frame Check Sequence, which is a 32-bit CRC of the frame. The weakening of this frame check with greater frame sizes is explored in R. Jain's "Error Characteristics of Fiber Distributed Data Interface (FDDI)", which appeared in IEEE Transactions on Communications, August 1990. Table VII shows a table of Hamming Distance versus frame size. Unfortunately, the CRC for frames greater than 11445 bytes only has a minimum Hamming Distance of 3. The implication is that the CRC will only detect one-bit and two-bit errors (and not necessarily non-burst 3-bit or 4-bit errors). The CRC for frames between 375 and 11543 bytes has a minimum Hamming Distance of 4, implying that all 1-bit, 2-bit and 3-bit errors are detected and most non-burst 4-bit errors are detected.


The paper has two implications. Firstly, the power of ethernet's Frame Check Sequence is the major limitation on increasing the ethernet MTU beyond 11444 bytes. Secondly, frame sizes under 11445 bytes are as well protected by ethernet's Frame Check Sequence as frame sizes under 1518 bytes.

---
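For concreteness, here's a minimal sketch in Python of how the frame check works, using zlib.crc32 (same generator polynomial as the 802.3 FCS; on-the-wire bit ordering details glossed over): the sender appends the CRC of the frame, and the receiver recomputes the CRC over frame plus FCS and compares against a fixed residue.

import struct, zlib

RESIDUE = 0x2144DF1C  # crc32(frame || fcs) for any intact frame

def add_fcs(frame):
    # append the 32-bit CRC, least significant byte first
    return frame + struct.pack('<I', zlib.crc32(frame))

def fcs_ok(frame_with_fcs):
    return zlib.crc32(frame_with_fcs) == RESIDUE

frame = add_fcs(b'\x00' * 1500)
assert fcs_ok(frame)

corrupt = bytearray(frame)
corrupt[42] ^= 0x04              # a single bit flipped in transit
assert not fcs_ok(bytes(corrupt))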



Is the FCS supposed to provide guaranteed protection against a certain number of bit errors per packet? I don't believe that's the case. With random bit errors, the risk of not detecting an error is still only on the order of 1 : 2^32, regardless of the length of the packet. But even if *any* effective weakening of the FCS caused by an increased packet size is considered unacceptable, it's still possible to do 11543-byte packets without changing the FCS algorithm.
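That's easy to check empirically. The sketch below (assumptions: zlib's CRC-32 standing in for the FCS, a randomly filled 11543-byte frame) exhaustively verifies that every 1-bit error in such a frame is caught, then samples random 3-bit errors, whose detection is no longer guaranteed at this size but which should still slip through only about once in 2^32 trials:

import os, random, struct, zlib

RESIDUE = 0x2144DF1C  # crc32(frame || fcs) for any intact frame

def fcs_ok(frame):
    return zlib.crc32(frame) == RESIDUE

payload = os.urandom(11543)
frame = payload + struct.pack('<I', zlib.crc32(payload))
assert fcs_ok(frame)

def flip(frame, bits):
    out = bytearray(frame)
    for pos in bits:
        out[pos // 8] ^= 1 << (pos % 8)
    return bytes(out)

nbits = len(frame) * 8

# 1-bit errors: detection is guaranteed (Hamming distance >= 3)
assert not any(fcs_ok(flip(frame, [p])) for p in range(nbits))

# random 3-bit errors: not guaranteed past ~11445 bytes, but
# misses should still occur only about once in 2^32 trials
misses = sum(fcs_ok(flip(frame, random.sample(range(nbits), 3)))
             for _ in range(20000))
print("undetected random 3-bit errors:", misses)  # almost surely 0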



Also, I don't see a fundamental problem in changing the FCS for a new 802.3 standard, as switches can strip off a 64-bit FCS and add a 32-bit one as required.
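A sketch of that rewrite, assuming a hypothetical frame format that carries an 8-byte FCS in its trailer (no such 802.3 format exists today):

import struct, zlib

def downgrade_fcs(frame):
    # Strip the trailing 8-byte FCS (hypothetical 64-bit format) and
    # append the classic 32-bit CRC over the remaining frame bytes.
    body = frame[:-8]
    return body + struct.pack('<I', zlib.crc32(body))

The one catch is that the switch has to verify the 64-bit FCS on ingress before throwing it away; otherwise a corrupted frame would be re-sealed under a perfectly valid 32-bit FCS.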