North American Network Operators Group

Re: Reducing Usenet Bandwidth

  • From: Stephen Stuart
  • Date: Tue Feb 12 05:01:52 2002

> Think about it. I post a reply to a question in a newsgroup. The more 
> intelligent and interesting it is, and the more my reputation makes people 
> want to read my interesting comments, the more I pay. Does that make any 
> sense?

You're stuck thinking about users again. This is between sites, as I
thought I explained previously. This is about my spool having less
trash.

If your post is not a pirated copy of Word, and if you, as a user, can
be intelligent and interesting and enhance your reputation in less
than, say, 20 KB per post (you seem to be doing fine in less than 2 KB,
so no worries there), then I don't think the wonderful world of USENET
should change for you. Not one bit.

On the other hand, I want the site that accepts postings from you to
incur higher costs if you or your site-mates inject pirated copies of
Word that take up space in my spool and eat up my bandwidth when *my*
site's set of users and downstream feeds have no interest in that
(apparently we lack "human nature").

On a private thread that cc'd you, I said this:

    If my server pre-fetched only the articles in the groups that its
    users were known to read, far fewer large binaries would visit its
    spool every day than do now. Behold, less trash.

    Yes, there would be completeness in rec.humor.funny and comp.mail.mh,
    and I would want those articles distributed in the manner they are now
    - whether by having articles pushed at me or by pre-fetching because I
    know there are consumers, I don't care which. The groups that *nobody*
    on my server reads would not consume spool space for article content,
    only for pointers. All the content would still be out there somewhere
    for some time, at the price of higher latency. If I had an additional
    knob that avoided pre-fetching content from servers that carried
    trash, so much the better.
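
To make the policy concrete, here is a minimal sketch in Python; the
group names, function, and return values are invented for illustration
and are not real news-server configuration:

    # Hypothetical policy: fetch full bodies only for groups with known
    # local readers; keep header-only "pointers" for everything else.
    groups_with_readers = {"rec.humor.funny", "comp.mail.mh"}

    def spool_decision(newsgroup):
        """Decide how an incoming article lands in the local spool."""
        if newsgroup in groups_with_readers:
            return "fetch-body"    # full article, distributed as today
        return "pointer-only"      # headers/overview only; body stays remote

    print(spool_decision("comp.mail.mh"))        # fetch-body
    print(spool_decision("alt.binaries.misc"))   # pointer-only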

"Pre-fetching" is loosely equivalent to "accept flooded articles" to
me; I don't make a fine point of only spooling articles that I know
will be read; I just don't want to spool today's five pirated copies
of Word. Neither do I want to prevent USENET from being used to
distribute larger stuff, though, so what to do?

The idea is to make the cost of injecting trash high. If your site
doesn't tend to inject lots of things that other sites tend not to
want to carry in their spool (pirated copies of Word, MP3s, and
pornography are the current set of examples), you - as a site - would
be rewarded by not having many unicast hits. If you do, you'd be
"denied" the ability to poison other people's spools. The articles you
have to offer would still be available as "pointers," in the event
that one of your site's many users contributed something other than
one of the items noted previously. Reducing the number of pirated
copies of Word offered by your site would be rewarded by a reduction
in the number of unicast hits as more sites became willing to accept
articles from you.
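
A rough sketch of that reward mechanism, again in Python: a site
tracks, per injecting peer, how much of what arrives is classified as
trash, and downgrades high-ratio peers to a pointer-only feed. The
class, the 25% threshold, and the peer name are all invented
assumptions:

    # Hypothetical knob: peers whose injected articles are mostly trash
    # get pointers accepted but flooded bodies refused.
    class PeerPolicy:
        def __init__(self, trash_threshold=0.25):
            self.trash_threshold = trash_threshold
            self.counters = {}    # peer -> [articles seen, trash seen]

        def record(self, peer, is_trash):
            seen = self.counters.setdefault(peer, [0, 0])
            seen[0] += 1
            if is_trash:
                seen[1] += 1

        def accept_body_from(self, peer):
            total, trash = self.counters.get(peer, (0, 0))
            if total == 0:
                return True       # no history yet: accept floods
            return trash / total < self.trash_threshold

    policy = PeerPolicy()
    for _ in range(8):
        policy.record("news.example.net", is_trash=True)
    policy.record("news.example.net", is_trash=False)
    print(policy.accept_body_from("news.example.net"))   # False: pointers only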

Note that I consider unicast transfers of articles to take place
between spools; when a reader asks for an article that is only
resident as a pointer, the reader's spool would go get it and, in
theory, cache it for the next reader who happened to want it. Probably
some unwanted copies of Word would end up in spools this way, but at
least it would be as a result of some user asking and not just because
ten other sites decide my spool should have as many copies of Word as
possible. Number of unicast transfers does not equal number of readers
of an article; hopefully you'll see some increasing distance between
what I'm saying and the belief that it would somehow change USENET for
you.
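
A sketch of that spool-to-spool fetch, with the caching behavior made
explicit; fetch_from_origin stands in for whatever unicast transfer
the spools would actually use (an assumption, not a real protocol
call):

    # Hypothetical spool: bodies we hold are cached; articles we hold
    # only as pointers are fetched from the origin spool on first read.
    local_cache = {}    # message_id -> article body
    pointers = {}       # message_id -> origin spool hostname

    def fetch_from_origin(origin, message_id):
        # Stand-in for a unicast article transfer from the remote spool.
        return "(body of %s fetched from %s)" % (message_id, origin)

    def read_article(message_id):
        if message_id in local_cache:        # already spooled or cached
            return local_cache[message_id]
        origin = pointers[message_id]        # pointer-only: go get it
        body = fetch_from_origin(origin, message_id)
        local_cache[message_id] = body       # cache for the next reader
        return body

    pointers["<abc@site.example>"] = "spool.origin.example"
    read_article("<abc@site.example>")       # one unicast fetch ...
    read_article("<abc@site.example>")       # ... then served from cache

The point to notice is that one unicast transfer serves any number of
subsequent local readers.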

When presented (again, on the private thread that cc'd you) with the
notion that no cost savings would result, because "users" are going
to fetch every article in large binary groups thanks to what is
loosely termed "human nature," I said:

    Yes, we agree on that. If the fact is that that particular aspect of
    human nature means we can't prevent reposting it all every week,
    though, then this really is an academic discussion. We can make the
    system more complex for the sake of shaving a few sharp edges off, but
    the real shape of USENET isn't going to change.

There might be some benefit from an academic perspective in an
implementation that would allow a news admin to set a knob so that
articles below a certain size got flooded as currently happens, while
articles above got "pointerized" (headers plus overview records,
perhaps) and thus only fetched if actually desired by a downstream
site.
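
That knob might look something like the following sketch; the 20 KB
default and the overview fields are assumptions pulled from the
numbers above, not from any existing implementation:

    # Hypothetical size knob: flood small articles as today, reduce
    # large ones to headers plus an overview record ("pointerize").
    POINTERIZE_ABOVE = 20 * 1024    # bytes; set by the news admin

    def outbound_form(article_bytes, headers):
        if len(article_bytes) <= POINTERIZE_ABOVE:
            return ("flood", article_bytes)    # push the full article
        overview = {k: headers.get(k, "")
                    for k in ("Message-ID", "Subject", "From", "Bytes")}
        return ("pointer", overview)           # fetched only if wanted

    kind, _ = outbound_form(b"x" * 50000, {"Message-ID": "<id@site>"})
    print(kind)    # pointer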

The notion is not to replace USENET with the web publishing model (we
already have the web for that, and "web forums" and their ilk haven't
exactly rendered USENET obsolete); if anything, it's to augment USENET
with some capabilities from that model, available for site admins to
use if they choose. If that turns into costs that site admins
want to recoup in the form of charges for their users, that's left as
an exercise for the implementors (remembering my opinion stated above
that I don't think USENET should change for you).

Stephen