North American Network Operators Group


RE: 95th Percentile = Lame

  • From: Greg A. Woods
  • Date: Sun Jun 03 23:30:24 2001

[ On Sunday, June 3, 2001 at 19:12:50 (-0700), James Thomason wrote: ]
> Subject: RE: 95th Percentile = Lame
>
> 
> The argument people seem to be making is that the cost of provisioning for
> this bursty traffic is the justification for average-based billing.  Yet
> 95th percentile billing assumes that in a worst-case scenario (for the
> carrier), 5 percent of bits passed are worthless. 

That's not it at all.

First off anyone speaking in general terms would never use an explicit
number when talking about rate-based billing.  That's something between
the customer and the ISP and specific to many technical details as well
as market forces.  I try to always use the phrase "Nth percentile", for
example.  Some studies have even shown that the 75th percentile is a
closer approximation to CIR in some frame relay configurations (see, for
instance, the paper referred to the last time this thread "occurred" -- I no
longer have the URL, but it was "Kawaihiko and the Third-Quartile Day"
by Nevil Brownlee and Russell Fulton).
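For concreteness, the usual Nth-percentile calculation can be sketched as follows.  This is only a minimal illustration of the common nearest-rank convention (sampling every 5 minutes, sorting, and discarding the top 100-N percent of samples), not any particular ISP's billing code:

```python
import math

def percentile_rate(samples_bps, n=95):
    """Return the Nth-percentile rate from per-interval throughput
    samples (conventionally 5-minute averages, in bits/sec).

    Nearest-rank method: sort the samples, discard the top
    (100 - n) percent, and bill at the highest remaining sample.
    """
    if not 0 < n <= 100:
        raise ValueError("n must be in (0, 100]")
    ordered = sorted(samples_bps)
    rank = math.ceil(n / 100 * len(ordered))  # 1-based rank
    return ordered[rank - 1]

# Twenty samples where a single burst fills 5% of the intervals:
# the burst falls into the discarded top 5% and does not set the bill.
samples = [10] * 19 + [1000]
print(percentile_rate(samples, 95))   # -> 10
```

Note how the choice of N directly controls how much burstiness the customer gets "for free" -- which is exactly why the number is a matter between the customer and the ISP.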

Secondly the basic assumption here is that the average ISP will have
more than one customer and that the usage profiles of two customers will
never be exactly the same from minute to minute.
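That assumption is what makes statistical multiplexing pay: when two customers' bursts don't line up, the link only has to be sized for the combined peak, not the sum of the individual peaks.  A toy illustration with made-up per-interval rates:

```python
# Hypothetical 5-minute samples (Mb/s) for two customers whose
# bursts happen to fall in different intervals.
a = [10, 10, 100, 10, 10, 10, 10, 10]
b = [10, 10, 10, 10, 10, 100, 10, 10]

# Per-interval load on the shared link.
combined = [x + y for x, y in zip(a, b)]

print(max(a) + max(b))  # 200 -- capacity needed if both peaked together
print(max(combined))    # 110 -- capacity actually needed here
```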

> What is the basis for this assumption?
> 
> Facts for the service provider: 
> 
> 1. Hardware costs are fixed. 
> 2. Leased line costs are fixed.
> 3. "Bandwidth" (Peering or Transit) may be variable. 
> 
> Why do "peak bits" cost more than "regular bits" ?

Why do you have to ask a question to which the answer is so obvious?!?!?!?  ;-)

It should be obvious that peak bits cost more than non-peak bits because
of basic supply and demand economics.  More people want more bits at
peak times and there are only so many to go around.  Making more of them
available is more costly (more expensive hardware, more expensive "last
mile" line costs, and indeed maybe even more premium bandwidth and
access charges).

If all of those things together were not the case then there would be no
such thing as a peak period in traffic volume!

> Why do I care?  Should not the cost of providing network access (at peak
> usage) be my -basis- of cost that is passed on to the customer?

Well, are you actually engineering your network to at least make a fair
attempt at handling the peak traffic volumes, or not?  If so then you're
right and you probably don't care (though your customers certainly will,
since your prices may not be competitive as a result!).

> I disagree with you here.  However, I do not want to descend into a power
> discussion on-list. :)  I do not think there is any real evidence to
> substantiate the assumption that "peak power" costs more than "off peak
> power".  Instead, power companies would like you to move your electricity
> consumption to more convenient times well within acceptable profit
> margins. 

I'll fall for it because I think I have a very good example that may
provide a bit of an analogy.

Perhaps you should take a look at the history of Trans-Alta Utilities,
formerly owned by the City of Calgary, Alberta, and how it used its
surplus power (mostly hydro generated, IIRC) in non-peak periods.  Once
upon a time Calgary was one of the most brightly lit night-time spots of
any significant size on the surface of the globe (and may still be,
though it seemed a lot darker at night there the last time I visited).
Back in the early 1980's I seem to remember hearing things like "we have
more street lights per capita than any other city in the world" and so
on....  You could see the glow of the city from 60-90 miles away (and
that's not just because it sits on the prairies or has a huge dome of
pollution sitting over it! :-).  [Recently there's been a plan in place
to reduce the wattage of street lamps in Calgary because escalating
energy costs doubled the city's electric bill for residential street
lamps in Jan 2001 over the Jan 2000 bill!]

In Calgary's case they had to build plants capable of delivering
electricity at peak usage periods (with room to spare, of course, for
both anomalous demand as well as to cover unscheduled down-time).  Since
their peak usage was almost always during daylight hours, they had many
kilowatts of surplus power available all night long because there was
ample "free" water behind the dams to spin their generators.  Their
costs were entirely in building capacity and in delivery -- the energy
itself is/was more or less "free", at least at that large a scale.

So, if peak-power doesn't "cost more" than off-peak power then how come
the City of Calgary was able to burn so much non-peak power without
taxing its residents as much as would have been necessary if they didn't
own their own power plants and were instead forced to pay flat rate
charges all night long?

Certainly there are economies of scale, but they don't account entirely
for the reasons why electricity suppliers would like consumers to
"balance" the load more evenly across the day.  It really does cost more
to build more/bigger generators when the peak usage grows, and those
costs must be passed on.  If consumers could balance their demands to
create a flat-line usage rate then existing capacity would stretch much
much further thus amortising capital costs used to build that capacity
over a much longer period of time.  Whether or not these savings would
be passed on to the consumer depends no doubt on whether your power
company is government owned or not.

Even when the energy source isn't "free" there are additional costs to
having to engineer supply systems to feed whatever the input fuel is,
and in some cases the increased demand for fuel will increase its price
too!  Economies of scale can only go so far.  Is one super-sized nuclear
plant "cheaper" than 10,000 evenly distributed slowpoke reactors?  What
about in the long term when the slowpoke can "burn" all the "spent" fuel
from the old-fashioned reactors?  What about from a safety perspective?

Indeed many large electricity consumers in various parts of the world do
in fact have to pay different rates at different times of day, and
they're forced to do so because their suppliers face increased costs if
their peak usage can't be kept under control.

Almost anyone in North America at least can tell you there are periods
of peak usage when it's sometimes hard to get a packet through edge-wise,
and other times when you can spew extra packets all over the place and
hardly notice any delays or congestion.  Clearly this would tend to
indicate that the Internet (here at least) has been reasonably well
engineered to cover the peak usage and thus is drastically over-
engineered from the point of view of anyone who doesn't need to use it
during peak periods.

-- 
							Greg A. Woods

+1 416 218-0098      VE3TCP      <[email protected]>     <[email protected]>
Planix, Inc. <[email protected]>;   Secrets of the Weird <[email protected]>