North American Network Operators Group


Re: Network end users to pull down 2 gigabytes a day, continuously?

  • From: Stephen Sprunk
  • Date: Sun Jan 21 13:54:35 2007


[ Note: please do not send MIME/HTML messages to mailing lists ]


Thus spake Alexander Harrowell
> Good thinking. Where do I sign? Regarding your first point, it's really
> surprising that existing P2P applications don't include topology
> awareness. After all, the underlying TCP already has mechanisms
> to perceive the relative nearness of a network entity - counting hops
> or round-trip latency. Imagine a BT-like client that searches for
> available torrents, and records the round-trip time to each host it
> contacts. These it places in a lookup table and picks the fastest
> responders to initiate the data transfer. Those are likely to be the
> closest, if not in distance then topologically, and the ones with the
> most bandwidth.
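
[ Just to make the quoted idea concrete, a toy Python sketch of RTT-ranked peer selection. The names are invented for illustration; no shipping BT client works exactly this way. ]

import socket, time

def measure_rtt(host, port=6881, timeout=2.0):
    """TCP connect time to a peer in seconds, or None on failure --
    a crude stand-in for 'hops or round-trip latency'."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return None

def rank_peers(peers, keep=20):
    """Probe each (host, port) once and keep the fastest responders,
    which are likely the topologically nearest."""
    timed = [(measure_rtt(h, p), (h, p)) for (h, p) in peers]
    timed = [(rtt, peer) for (rtt, peer) in timed if rtt is not None]
    timed.sort()
    return [peer for (_, peer) in timed[:keep]]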

The BT algorithm favors peers with the best performance, not peers that are close. You can rail against this all you want, but expecting users to do anything other than local optimization is a losing proposition.


The key is tuning the network so that local optimization coincides with global optimization. As I said, I often get 10x the throughput with peers in Europe vs. peers in my own city. You don't like that? Well, rate-limit BT traffic at the ISP border and _don't_ rate-limit within the ISP. (s/ISP/POP/ if desired) Make the cheap bits fast and the expensive bits slow, and clients will automatically select the cheapest path.
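
[ In practice the fast/slow split is a job for the border routers rather than Python, but the decision is trivial; a hypothetical sketch with made-up prefixes and rate caps: ]

import ipaddress

# Example ISP-internal prefixes (documentation ranges, purely illustrative).
LOCAL_PREFIXES = [ipaddress.ip_network(p) for p in ("192.0.2.0/24", "198.51.100.0/24")]

def rate_cap_kbps(peer_ip, local_cap=20000, border_cap=2000):
    """Cheap bits fast, expensive bits slow: a generous cap inside the
    ISP/POP, a tight cap for anything that crosses the border."""
    addr = ipaddress.ip_address(peer_ip)
    if any(addr in net for net in LOCAL_PREFIXES):
        return local_cap
    return border_cap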

> Further, imagine that it caches the search - so when you next seek
> a file, it checks for it first on the hosts nearest to it in its "routing
> table", stepping down progressively if it's not there. It's a form of
> local-pref.

Experience shows that it's not necessary, though if it has a non-trivial positive effect I wouldn't be surprised if it shows up someday.
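
[ The cached "routing table" lookup the quote describes might look something like this -- again purely illustrative, with hypothetical names: ]

def find_source(infohash, peers_by_rtt, has_file):
    """peers_by_rtt: known peers sorted nearest-first (e.g. by measured RTT).
    has_file(peer, infohash): asks a peer whether it can serve the torrent.
    Check the nearest hosts first and step down progressively."""
    for peer in peers_by_rtt:
        if has_file(peer, infohash):
            return peer
    return None   # nothing cached nearby; fall back to a fresh tracker/DHT search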


> It's a nice idea to collect popularity data at the ISP level, because
> the decision on what to load into the local torrent servers could be
> automated.

Note that collecting popularity data could be done at the edges without forcing all tracker requests through a transparent proxy.
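
[ A hypothetical sketch of edge-based popularity counting: tally distinct local customers per infohash from whatever announce/flow data you already see, no proxy required: ]

from collections import defaultdict

interest = defaultdict(set)   # infohash -> set of customer IDs seen requesting it

def record_announce(infohash, customer_id):
    interest[infohash].add(customer_id)

def popular_torrents(threshold=25):
    """Torrents that enough distinct customers want to justify pulling
    them onto the ISP's local peer."""
    return [h for h, customers in interest.items() if len(customers) >= threshold]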


> Once torrent X reaches a certain trigger level of popularity, the local
> server grabs it and begins serving, and the local-pref function on the
> clients finds it. Meanwhile, we drink coffee. However, it's a potential
> DOS magnet - after all, P2P is really a botnet with a badge.

I don't see how. If you detect that N customers are downloading a torrent, then having the ISP's peer download that torrent once and serve it to the customers means you consume 1/N of the upstream bandwidth you otherwise would. That's an anti-DOS :)
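
[ Back-of-the-envelope version of that, with made-up numbers: ]

n_customers   = 200     # local customers grabbing the same torrent
torrent_gb    = 1.4     # size of the torrent
without_local = n_customers * torrent_gb   # every copy crosses the border: 280 GB
with_local    = torrent_gb                 # ISP's peer fetches it once: 1.4 GB
print(with_local / without_local)          # 0.005, i.e. 1/N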


> And the point of a topology-aware P2P client is that it seeks the
> nearest host, so if you constrain it to the ISP local server only, you're
> losing part of the point of P2P for no great saving in peering/transit.

That's why I don't like the idea of transparent proxies for P2P; you can get 90% of the effect with 10% of the evilness by setting up sane rate-limits.


> As long as they don't interfere with the user's right to choose someone
> else's content, fine.

If you're getting it from an STB, well, there may not be a way for users to add 3rd party torrents; how many users will be able to figure out how to add the torrent URLs (or know where to find said URLs) even if there is an option? Remember, we're talking about Joe Sixpack here, not techies.


You would, however, be able to pick whatever STB you wanted (unless ISPs deliberately blocked competitors' services).

S

Stephen Sprunk "God does not play dice." --Albert Einstein
CCIE #3723 "God is an inveterate gambler, and He throws the
K5SSS dice at every possible opportunity." --Stephen Hawking