North American Network Operators Group

Re: Keynote/Boardwatch Internet Backbone Index A better test!!!

  • From: Craig A. Huegen
  • Date: Tue Jul 01 12:49:00 1997

On 1 Jul 1997, Sean M. Doran wrote:

==>The reality, however, is that most American ISPs seem to
==>engage in knee-jerk denial or aggressive posturing
==>whenever there is a suggestion that their network is
==>anything but perfect.  I certainly have been in the middle
==>of that kind of thing, so I can hardly claim innocence.
==>
==>Such reactions are pure marketing: we can't admit that
==>maybe there is some way we could improve things or some
==>set of things our network doesn't do well because that
==>would hurt our product image.

I don't work for an NSP or ISP whose image can be hurt; my reactions were
based solely upon the flawed methodology used by the magazines.  For the
most part, my beef with the study was that it didn't measure *at* *all*
what they claimed to measure.  If they had said "end-to-end performance to
various providers' web servers", great.  But it certainly does NOT measure
backbone performance.  That point has been made enough, though, and I
digress.

But it's one of the things we have to live with, I suppose.  Trade rags
aren't exactly known for being unbiased or for having accurate technical
content.

==>People who react with marketing-think and who build with
==>marketing-think in the first place deserve to be torn
==>apart by analyses based in marketing-think.  While the
==>article in question isn't in my hands yet, based on Mr
==>Rickard's and others' comments on the NANOG list, I think
==>it's safe to guess that this is precisely what the
==>Boardwatch study does.

I would have to disagree, Sean: there are two paths, either of which
could give a redesign of this study some merit (once the methodologies
are fixed):

1.  Stick with the backbone performance figure and install an equal-sized
circuit to each backbone.  Place a web server on it, carrying no other
traffic, and download the *exact* *same* file from each web server from
the original 27 locations.  Probably should up that number a bit more.
(A rough sketch of such a timed download appears after these options.)

2.  Stick with the end-to-end performance figure, change the study's
title, and make it a bit more scientific by asking the backbones to put
this file on their servers for testing.  Sure, some backbones may decline,
and that can be published too.  Readers can draw their own conclusions as
to why a provider declined.

/cah