North American Network Operators Group


Re: 10GE router resource

  • From: Andrew C Burnette
  • Date: Thu Mar 27 01:42:00 2008

William Herrin wrote:
> On Wed, Mar 26, 2008 at 4:26 PM, Sargun Dhillon <[email protected]> wrote:
>> from a viewpoint of hardware, x86 is a fairly decent platform. I can
>> stuff 40 1 GigE ports in it (4x10GigE multiplexed with a switch).
>> Though, the way that Linux works, it cannot handle high packet rates.
>
> Correction: The way DRAM works, it cannot handle high packet rates. Also
> note that the PCI-X bus tops out in the 7 to 8 Gbps range and it's
> half-duplex.
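
To put rough numbers on that, a back-of-envelope sketch (the DRAM latency
figure here is an assumed round number, not a measurement):

    /* Why DRAM latency, not CPU speed, caps small-packet forwarding.
     * All figures are illustrative assumptions, not measurements.    */
    #include <stdio.h>

    int main(void) {
        /* 10GE at 64-byte frames: add 8B preamble + 12B inter-frame gap */
        double line_rate  = 10e9;                 /* bits/sec       */
        double frame_bits = (64 + 8 + 12) * 8.0;  /* 672 bits/frame */
        double pps        = line_rate / frame_bits;

        /* assume ~60 ns per dependent random DRAM access */
        double dram_ns    = 60.0;

        printf("10GE min-frame rate: %.1f Mpps\n", pps / 1e6);        /* ~14.9 */
        printf("DRAM-bound lookups:  %.1f M/s\n", 1e9/dram_ns/1e6);   /* ~16.7 */
        return 0;
    }

A single DRAM touch per packet barely fits that budget, and a radix-tree
walk needs several dependent reads per lookup, so line rate at minimum
frame size is out of reach before any ACL work even starts.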

Indeed. PCI-X is already an EOL'ed interface; if only cheap PCI-X cards were available. Once you add extensive ACLs, there's far more [central] processing to be done than just packet routing (100k choices versus 2 to 4 interfaces), and system throughput gets slammed rather quickly. Linux iptables grumbles painfully at 100k-line ACLs :) Not to mention the options for what to do with a packet are very limited.
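
The pain is structural: a flat iptables chain is matched linearly, first
hit wins, so per-packet cost grows with the rule count. A toy sketch of
that linear scan (mine, purely illustrative; not kernel code):

    /* Toy first-match-wins ACL scan: O(rules) work per packet,
     * which is why 100k flat rules hurt.  Illustrative only.   */
    #include <stdint.h>
    #include <stdio.h>

    struct rule { uint32_t net, mask; int action; };   /* 1 = drop */

    static int classify(const struct rule *acl, int n, uint32_t dst) {
        for (int i = 0; i < n; i++)
            if ((dst & acl[i].mask) == acl[i].net)
                return acl[i].action;                  /* first match wins */
        return 0;                                      /* default: accept  */
    }

    int main(void) {
        struct rule acl[] = {
            { 0x0A000000u, 0xFF000000u, 1 },   /* drop 10.0.0.0/8     */
            { 0xC0A80000u, 0xFFFF0000u, 1 },   /* drop 192.168.0.0/16 */
        };
        printf("%d\n", classify(acl, 2, 0x0A010203u)); /* 10.1.2.3 -> 1 */
        return 0;
    }

At 100k rules and ~15 Mpps, the worst case is on the order of 10^12 rule
tests per second; hash- or trie-based classification is the only way out
of that linear scan.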


The AMD chips with extra L1 cache perform better on *BSD platforms: the forwarding code is tight and likes to stay close to the CPU, and context switching kills packet-processing performance (thus the small but notable increase in multicore performance). The GP registers on the AMD platform are also easy to deal with (and in 64-bit mode you get double the number for free), essentially working an end-run around a broken stack architecture from a few decades ago... anyone recall the simplicity of assembly language on the 6800 or the 6502? :-)

Getting the latency down low enough for HPC clusters is a major hassle, as the x86 PC design just doesn't have the bandwidth.

Of course, Intel makes some slick NPUs for custom work (e.g. cloudshield.com), if you like starting at bit 0. (Isn't that like slot zero or port zero? It technically doesn't exist, since zero is only a placeholder in larger numbers if you mean anything greater than none. I could swear back in the days of the SLC-96, ports were numbered 1-96, not 0-95. :-) )

http://developer.intel.com/design/network/products/npfamily/index.htm?iid=ncdcnav2+proc_netproc

Too bad they [Intel] don't make a HyperTransport-capable version, or you'd have one helluva multicore, multi-NPU system with no glue logic required.

Fun to play around with, though.

regards,
andy

> High-rate routers try to keep the packets in an SRAM queue and, instead
> of looking up destinations in a DRAM-based radix tree, they use a
> special memory device called a TCAM.
>
> http://www.pagiamtzis.com/cam/camintro.html
>
> Regards,
> Bill Herrin
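
For contrast with the TCAM's single-cycle parallel compare, here's a
minimal binary-trie longest-prefix match of the kind Bill describes (my
sketch, illustrative only): every bit examined is another dependent
pointer chase, and in DRAM each chase is another full-latency stall.

    /* Binary-trie longest-prefix match.  Worst case: 32 dependent
     * memory reads per lookup -- the serial walk a TCAM replaces
     * with one parallel compare.  Illustrative sketch only.       */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct node {
        struct node *child[2];
        int          next_hop;        /* -1 = no route stored here */
    };

    static struct node *node_new(void) {
        struct node *n = calloc(1, sizeof *n);
        n->next_hop = -1;
        return n;
    }

    static void insert(struct node *root, uint32_t prefix, int len, int hop) {
        struct node *n = root;
        for (int i = 0; i < len; i++) {
            int bit = (prefix >> (31 - i)) & 1;
            if (!n->child[bit])
                n->child[bit] = node_new();
            n = n->child[bit];
        }
        n->next_hop = hop;
    }

    /* Walk the trie, remembering the deepest next_hop seen
     * (i.e. the longest matching prefix).                   */
    static int lookup(const struct node *root, uint32_t dst) {
        const struct node *n = root;
        int best = -1;
        for (int i = 0; n && i < 32; i++) {
            if (n->next_hop >= 0) best = n->next_hop;
            n = n->child[(dst >> (31 - i)) & 1];
        }
        if (n && n->next_hop >= 0) best = n->next_hop;
        return best;
    }

    int main(void) {
        struct node *root = node_new();
        insert(root, 0x0A000000u, 8,  1);   /* 10.0.0.0/8  -> hop 1 */
        insert(root, 0x0A010000u, 16, 2);   /* 10.1.0.0/16 -> hop 2 */
        printf("%d\n", lookup(root, 0x0A010203u)); /* 10.1.2.3  -> 2 */
        printf("%d\n", lookup(root, 0x0A7F0001u)); /* 10.127.0.1 -> 1 */
        return 0;
    }

Each level of that loop is a load that depends on the previous one, so
the lookup time is bounded by memory latency, not by how fast the CPU
can execute the comparisons.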