North American Network Operators Group


Re: latency (was: RE: cooling door)

  • From: Mikael Abrahamsson
  • Date: Sun Mar 30 04:25:51 2008


On Sat, 29 Mar 2008, Frank Coluccio wrote:


> Understandably, some applications fall into a class that requires very-short
> distances for the reasons you cite, although I'm still not comfortable with the
> setup you've outlined. Why, for example, are you showing two Ethernet switches
> for the fiber option (which would naturally double the switch-induced latency),
> but only a single switch for the UTP option?

Yes, I am showing a case where you have switches in each rack, so that each rack is uplinked with a fiber to a central aggregation switch, as opposed to running a lot of UTP from the racks directly into the aggregation switch.


> Now, I'm comfortable in ceding this point. I should have made allowances for this
> type of exception in my introductory post, but didn't, as I also omitted mention
> of other considerations for the sake of brevity. For what it's worth, propagation
> over copper is faster than propagation over fiber, as copper has a higher nominal
> velocity of propagation (NVP) rating than fiber does, though not so much higher
> as to cause the difference you've cited.

The 2/3 speed of light in fiber, as opposed to the propagation speed in copper, was not what I had in mind.
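To put rough numbers on it (the NVP figures below are illustrative assumptions, not measurements):

# Propagation delay over 100 m of fiber vs. copper.
# Assumed NVPs: ~0.67c for fiber, ~0.70c for twisted pair.
C = 299_792_458   # speed of light in vacuum, m/s
DISTANCE = 100    # link length, metres

for medium, nvp in [("fiber", 0.67), ("copper", 0.70)]:
    delay_ns = DISTANCE / (nvp * C) * 1e9
    print(f"{medium}: {delay_ns:.0f} ns over {DISTANCE} m")

# fiber: ~498 ns, copper: ~477 ns. A difference of some 20 ns,
# which is noise next to the microseconds each switch hop adds.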


> As an aside, the manner in which o-e-o and e-o-e conversions take place when
> transitioning from electronic to optical states, and back, affects latency
> differently across differing link assembly approaches used. In cases where 10Gbps

My opinion is that the major factors in the added end-to-end latency in my example are that the packet has to be serialised three times instead of once, and that there are three lookups instead of one. Lookups take time, and putting the packet on the wire takes time.
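To put numbers on the serialisation part (frame size and rates are just illustrative):

# Serialisation delay: the time to clock a packet onto the wire.
def serialisation_delay_us(size_bytes, rate_bps):
    return size_bytes * 8 / rate_bps * 1e6

PACKET = 1500  # bytes, a full-size Ethernet frame

for rate_bps, label in [(1e9, "1 Gbit/s"), (10e9, "10 Gbit/s")]:
    once = serialisation_delay_us(PACKET, rate_bps)
    print(f"{label}: {once:.1f} us once, {3 * once:.1f} us for three")

# 1 Gbit/s: 12.0 us once, 36.0 us three times.
# 10 Gbit/s: 1.2 us once, 3.6 us three times.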


Back in the 10 megabit/s days, there were switches that did cut-through switching: if the output port was free the instant the packet came in, the switch would make its forwarding decision as soon as the header had been received, and start sending the packet out the outgoing port before it had been completely received on the incoming port.
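A rough sketch of why cut-through helped at those speeds (the 64-byte figure is an assumption about how much of the frame the switch reads before deciding):

# Store-and-forward vs. cut-through at 10 Mbit/s, per switch.
RATE = 10e6    # bits/s
FRAME = 1500   # bytes, full frame
HEADER = 64    # bytes read before the forwarding decision (assumed)

def on_wire_ms(nbytes):
    return nbytes * 8 / RATE * 1e3

print(f"store-and-forward: {on_wire_ms(FRAME):.2f} ms before forwarding starts")
print(f"cut-through:       {on_wire_ms(HEADER):.3f} ms before forwarding starts")

# store-and-forward: 1.20 ms per switch; cut-through: 0.051 ms.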

> By chance, is the "deserialization" you cited earlier, perhaps related to this
> inverse muxing process? If so, then that would explain the disconnect, and if it
> is so, then one shouldn't despair, because there is a direct path to avoiding this.

No, it's the store-and-forward architecture used in all modern equipment (that I know of). A packet has to be completely taken in over the wire into a buffer, a lookup has to be done to decide where the packet should be sent out, it has to be transferred over a bus or fabric, and then it has to be clocked out on the outgoing port from another buffer. This adds latency at each switch hop along the way.
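A toy model of how that accumulates (the lookup and fabric figures are assumed purely for illustration):

# Per-hop latency in a store-and-forward switch, summed over a path.
RATE = 1e9        # bits/s on every link
PACKET = 1500     # bytes
LOOKUP_US = 1.0   # assumed lookup time per switch
FABRIC_US = 2.0   # assumed bus/fabric transfer time per switch

def per_hop_us():
    serialise = PACKET * 8 / RATE * 1e6  # clocking the packet out again
    return serialise + LOOKUP_US + FABRIC_US

for hops in (1, 3):
    print(f"{hops} hop(s): {hops * per_hop_us():.1f} us")

# 1 hop: 15.0 us, 3 hops: 45.0 us. Every switch between rack and
# aggregation repeats the whole buffer/lookup/serialise cycle.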


As Adrian Chadd mentioned in the email sent after yours, this can of course be handled by modifying existing protocols, or creating new ones, to take latency into account. It's just that with what is available today, this is a problem. Each directory listing or file access over NFS takes a bit longer as latency is added, and this reduces performance with current protocols.
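The effect is easiest to see as round trips, since a chatty protocol pays the RTT once per operation (the file count and RTTs below are assumptions):

# Cost of one round trip per file operation.
FILES = 1000  # files touched by a directory listing (assumed)

for rtt_ms, where in [(0.05, "same rack"),
                      (0.5, "across the data centre"),
                      (5.0, "across town")]:
    print(f"{where} (RTT {rtt_ms} ms): "
          f"{FILES * rtt_ms / 1e3:.2f} s for {FILES} per-file lookups")

# same rack: 0.05 s; across the data centre: 0.50 s; across town: 5.00 s.
# Bandwidth never enters into it, only the number of round trips.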

Programmers who write client/server applications are starting to notice this, and I know of companies that put latency-inducing software on their development servers so that programmers are exposed to the same conditions in the development environment as in the real world. For some, this means writing more advanced SQL queries that get everything done in a single query, instead of issuing multiple queries and varying them depending on what the first result was.
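As a concrete (entirely hypothetical) illustration of that rewrite, using sqlite3 from the Python standard library; against a remote database, each iteration of the loop below would cost a full round trip:

# N+1 queries vs. a single query. Schema and data are invented.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER,
                         total REAL);
    INSERT INTO customers VALUES (1, 'alice'), (2, 'bob');
    INSERT INTO orders VALUES (1, 1, 10.0), (2, 1, 20.0), (3, 2, 5.0);
""")

# Naive: one query per customer -- N+1 round trips.
for cust_id, name in db.execute("SELECT id, name FROM customers").fetchall():
    (total,) = db.execute(
        "SELECT COALESCE(SUM(total), 0) FROM orders WHERE customer_id = ?",
        (cust_id,)).fetchone()
    print(name, total)

# Latency-aware: the same answer in one round trip.
for name, total in db.execute("""
        SELECT c.name, COALESCE(SUM(o.total), 0)
        FROM customers c LEFT JOIN orders o ON o.customer_id = c.id
        GROUP BY c.id"""):
    print(name, total)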

Also, protocols such as SMB and NFS that use message blocks over TCP have to be abandoned and replaced with real streaming protocols and large window sizes. Xmodem wasn't a good idea back then, and it's not a good idea now (even though the blocks today are larger than the 128 bytes of 20-30 years ago).
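The arithmetic behind that is the bandwidth-delay product: a stop-and-wait or small-window protocol can never move more than one window per round trip, however fast the link (the figures below are illustrative):

# Throughput ceiling of a windowed protocol: window / RTT.
RTT_S = 0.05  # 50 ms round trip (assumed)

for window_bytes, label in [(128, "Xmodem-style 128-byte block"),
                            (64 * 1024, "classic 64 KiB TCP window"),
                            (8 * 1024 * 1024, "8 MiB scaled window")]:
    print(f"{label}: at most {window_bytes * 8 / RTT_S / 1e6:.2f} Mbit/s")

# 128 B: ~0.02 Mbit/s; 64 KiB: ~10.5 Mbit/s; 8 MiB: ~1342 Mbit/s.
# Filling a 1 Gbit/s path at 50 ms RTT takes a ~6.25 MB window,
# which is what TCP window scaling (RFC 1323) is for.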

--
Mikael Abrahamsson    email: [email protected]