North American Network Operators Group


Re: Can P2P applications learn to play fair on networks?

  • From: Sam Stickland
  • Date: Mon Oct 22 09:23:09 2007


Sean Donelan wrote:

Much of the same content is available through NNTP, HTTP and P2P. The content part gets a lot of attention and outrage, but network engineers seem to be responding to something else.


If it's not the content, why are network engineers at so many university,
enterprise, and public networks concerned about the impact particular P2P
protocols have on network operations? If it were just a single network,
maybe that network is evil. But when many different networks all start
responding, then maybe something else is the problem.


The traditional assumption is that all end hosts and applications cooperate and fairly share network resources. NNTP is usually considered a very well-behaved network protocol: big bandwidth, but it shares network resources. HTTP is a little less well-behaved, but still seems to share network resources roughly equally with other users. P2P applications seem
to be extremely disruptive to other users of shared networks, and cause
problems for other "polite" network applications.


What exactly is it that P2P applications do that is impolite? AFAIK they are mostly TCP-based, so it can't be that they lack congestion avoidance; is it just that they utilise multiple TCP flows? Or is it the view that needing TCP congestion avoidance to kick in at all is bad in itself (i.e. raw bandwidth consumption)?
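
To put a number on the multi-flow point, here's a toy calculation of my own (not something from the thread): if the bottleneck ends up sharing capacity roughly equally per TCP flow, a client that opens many flows takes a proportionally larger share of the link, even though every individual flow is doing textbook congestion avoidance. The names below are made up for illustration:

def host_share(flows_per_host):
    """Each host's fraction of the bottleneck, assuming the link is
    divided equally among all active TCP flows (idealised TCP fairness)."""
    total_flows = sum(flows_per_host.values())
    return {host: n / total_flows for host, n in flows_per_host.items()}

# One P2P client with 30 flows alongside nine single-flow web users:
flows = {"p2p_user": 30}
flows.update({"web_user_%d" % i: 1 for i in range(9)})
shares = host_share(flows)
print(round(shares["p2p_user"], 2))    # 0.77 -- one host takes ~77% of the link
print(round(shares["web_user_0"], 2))  # 0.03 -- each single-flow user gets ~3%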

It seems to me that the problem is more general than just P2P applications, and there are two possible solutions:

1) Some kind of magical capability is added to the network to allow it to do congestion avoidance on a per-IP basis, rather than on a per-TCP-flow basis (see the first sketch below). As previously discussed on NANOG, there are many problems with this approach, not least the fact that the core ends up tracking a lot of flow information.

2) A QoS scavenger class is implemented so that users get a guaranteed minimum, with everything above this marked to be dropped first in the event of congestion (see the second sketch below). Of course, the QoS markings aren't carried inter-provider, but I assume that most of the congestion this thread talks about is occurring within the first AS?
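
For what it's worth, here's a toy Python sketch of option 1 (purely my own illustration, with made-up names): the bottleneck keeps one queue per source IP and serves them round-robin, so opening extra TCP flows no longer buys a bigger share. It also shows where the per-source state comes from, which is the scaling worry above:

from collections import defaultdict, deque

class PerSourceRoundRobin:
    """One FIFO per source IP, served round-robin: a source with 30 flows
    gets the same turn as a source with 1 flow."""

    def __init__(self):
        self.queues = defaultdict(deque)   # src_ip -> queued packets
        self.active = deque()              # rotation order of backlogged sources

    def enqueue(self, src_ip, packet):
        if not self.queues[src_ip]:
            self.active.append(src_ip)     # newly backlogged source joins the rotation
        self.queues[src_ip].append(packet)

    def dequeue(self):
        if not self.active:
            return None
        src_ip = self.active.popleft()
        packet = self.queues[src_ip].popleft()
        if self.queues[src_ip]:
            self.active.append(src_ip)     # still backlogged, keep it in the rotation
        return src_ip, packet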
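
And an equally rough sketch of option 2 (again my own illustration; the floor value and field names are assumptions, not anything from the thread): traffic above a per-user floor gets remarked into a scavenger class, and the scavenger class is dropped first once the queue starts to fill:

FLOOR_BPS = 1000000   # hypothetical guaranteed per-user minimum

def classify(user_rate_bps, packet):
    """Mark anything above the per-user floor as scavenger."""
    packet["class"] = "best_effort" if user_rate_bps <= FLOOR_BPS else "scavenger"
    return packet

def should_drop(queue_depth, queue_limit, packet):
    """Under congestion, discard scavenger traffic before best effort."""
    if queue_depth < queue_limit // 2:
        return False                       # no congestion: keep everything
    if packet["class"] == "scavenger":
        return True                        # over-floor traffic goes first
    return queue_depth >= queue_limit      # best effort only dropped when the queue is full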

Sam