North American Network Operators Group


Re: Keynote/Boardwatch Internet Backbone Index A better test!!!

  • From: Craig A. Huegen
  • Date: Fri Jun 27 17:47:17 1997

On Fri, 27 Jun 1997, Jack Rickard wrote:

==>This assumes that you consider web server location and web server
==>performance to NOT be a part of overall network performance.  Our view

But insofar as the article goes, the stated intent was to measure BACKBONE
PERFORMANCE, not backbone web server performance.

There is a HUGE difference between the two.

==> I would hope so.   Can we break it down into what is purely web server
==>hardware performance, what is web server software performance, what is NIC
==>card on the web server, what is the impact of the first router the web
==>server is connected to, what is the impact of hub design and the interface
==>between IP routing and ATM switching, what part is the impact of
==>interconnections with other networks, what part is peering, what part is
==>just goofy router games?  Uh,,, NO we can't.  

You *can*, however, come up with a methodology better matched to the
study's stated intent; or, if you prefer to keep your methodology, clear
up the misconceptions your readers will come away with.

==>results should factor to zero relatively.  They didn't.  They didn't to a
==>shocking degree.  And at this point I am under the broad assumption that
==>server performance doesn't account for all of it, perhaps little of it. 
==>But I could be widely wrong on the entire initial assumption.

I would challenge the assumption that it accounts for little.  The
machine the web server runs on, combined with the OS, the load average,
and even the web server software itself, probably accounts for a very
good portion of any delays you may have seen.

How many times do you go to a web site and see "Host contacted... 
awaiting response"?  When you see that, you have already made the network
connection and submitted your query.  Any time you see that at the
bottom, it's usually indicative of web server delay.  (There is a
possibility of packet loss in the initial query, but I'd venture to
state that affects a very small percentage of queries made to web servers.) 
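That distinction (connection made vs. response pending) is measurable.
A minimal sketch of the idea: time the TCP connect separately from the
wait for the first response byte.  The connect time is dominated by the
network path; the time-to-first-byte after the request is sent is
dominated by the server.  The host, port, and path below are
placeholders, and a real study would take repeated samples, not one:

```python
# Sketch only: separate network delay from server delay for one request.
# The target host/path are placeholders -- substitute the server under test.
import socket
import time

def probe(host, port=80, path="/"):
    t0 = time.time()
    s = socket.create_connection((host, port), timeout=10)
    t_connect = time.time() - t0        # TCP handshake: network path time
    s.sendall(("GET %s HTTP/1.0\r\nHost: %s\r\n\r\n"
               % (path, host)).encode("ascii"))
    t1 = time.time()
    s.recv(1)                           # block until the first response byte
    t_first_byte = time.time() - t1     # mostly server "think" time
    s.close()
    return t_connect, t_first_byte
```

If `t_first_byte` dwarfs `t_connect` across many samples, the delay the
end user sees is being spent in the server, not the backbone.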

==>In any event, the networks have total control and responsibility for their
==>own web servers, much as they do for their own network if you define that
==>as something separate from their networks.  We measured web page downloads
==>from an end user perspective, and those are the results in aggregate.  If
==>it leads to a flurry of web server upgrades, and that moves the numbers,
==>we'll know more than we did.  If it leads to a flurry of web server
==>upgrades, and it FAILS to move the numbers, that will tell us something as
==>well.  

But again, if I were in the business of providing nationwide network
service for my customers, and provided my web site as a marketing tool
(like most companies out there), I would architect my network so that
the customer comes first.  The web site can be used for information
about the business, but it isn't A-1 critical to operation of the
network.  I'd side with the priori.net folks here on their
architecture: the web server really shouldn't be put into a POP. 

==>Our broad theory is that nothing is going to improve as long as anything
==>you do doesn't count and is not detectable by anyone anywhere.  If a
==>particular network can move their results in any fashion, that is an
==>improvement in the end user experience, however achieved.  

But the results you publish don't match the study's intent.

/cah