North American Network Operators Group


Re: Can P2P applications learn to play fair on networks?

  • From: Perry Lorier
  • Date: Mon Oct 22 08:16:03 2007




Will P2P applications really never learn to play nicely on the network?



So from an operations perspective, how should P2P protocols be designed?


It appears that the current solution is for ISPs to put up barriers to P2P usage (like Comcast's spoofed RSTs), and so P2P clients try harder and harder to hide themselves in order to work around those barriers.

Would a way to proxy P2P downloads via an ISP-operated proxy actually be used by ISPs, rather than abused as yet another way to shut down and limit P2P usage? If so, how would clients discover these proxies, or should they be manually configured?
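
For discovery, I'm imagining something as simple as the following (a rough Python sketch; the well-known proxy names and the fallback logic are pure invention on my part, nothing like this exists today):

    import socket
    from typing import Optional

    # Hypothetical well-known names an ISP might publish in its customers'
    # DNS search domain; no such convention actually exists.
    CANDIDATE_PROXY_NAMES = ["p2p-proxy", "p2p-proxy.isp.example.net"]

    def discover_proxy(manual_proxy: Optional[str] = None) -> Optional[str]:
        """Prefer a manually configured proxy, otherwise probe well-known names."""
        if manual_proxy:
            return manual_proxy
        for name in CANDIDATE_PROXY_NAMES:
            try:
                return socket.gethostbyname(name)
            except socket.gaierror:
                continue
        return None  # no proxy found; fall back to ordinary peer-to-peer transfers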

Would stronger topological sharing be beneficial? If so, how do you suggest end users' software get access to the information required to make these decisions in an informed manner? Should P2P clients be participating in some kind of weird IGP? Should they participate in BGP? How can the P2P software understand your TE decisions? At the moment P2P clients upload to a limited number of peers, and every so often they discard the slowest one and choose someone else; in theory this means they abandon slow or congested paths for faster ones. Another metric they can easily measure is RTT; is RTT a good indicator of where operators want traffic to flow? Clients could also do similarity matching on the remote IP and prefer peers with similar addresses; presumably that isn't going to work well for many people, but would it be enough to help significantly? What else should clients use as peer-selection metrics that works in an ISP-friendly manner?
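
To make that concrete, the sort of peer scoring I have in mind might look like this (a rough Python sketch; the prefix-similarity trick is a stand-in for real topology knowledge, and the weights and addresses are made up for illustration):

    import ipaddress

    def prefix_similarity(a: str, b: str) -> int:
        """Number of leading bits two IPv4 addresses share: a crude stand-in
        for topological closeness that knows nothing about real topology or TE."""
        xa = int(ipaddress.ip_address(a))
        xb = int(ipaddress.ip_address(b))
        return 32 - (xa ^ xb).bit_length()

    def peer_score(local_ip: str, peer_ip: str, rtt_ms: float) -> float:
        # Invented weights: prefer "nearby" addresses, penalize high RTT.
        return 2.0 * prefix_similarity(local_ip, peer_ip) - rtt_ms / 10.0

    # Example: pick the best-looking peer from (address, measured RTT) pairs.
    candidates = [("203.0.113.7", 12.0), ("198.51.100.9", 180.0)]
    best_ip, best_rtt = max(candidates, key=lambda p: peer_score("203.0.113.42", *p))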

If P2P clients started using multicast to stream pieces out to peers, would ISPs make sure that multicast worked (at least within their own AS)? Would this save enough bandwidth for ISPs to care? Can enough ISPs make use of multicast, or would they end up hauling the same data multiple times across their network anyway? Are there any other obvious ways of getting the bits to the user without them passing needlessly across the ISP's network several times (often in alternating directions)?
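
As a strawman, pushing a piece to every subscribed peer in a single transmission might look like this (a rough Python sketch; the group address, port, TTL and packet format are placeholders I made up):

    import socket
    import struct

    MCAST_GROUP = "239.192.0.1"   # made-up group in the organization-local scope range
    MCAST_PORT = 7000             # made-up port
    CHUNK = 1400                  # keep datagrams under a typical MTU

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    # Limit how far the traffic propagates; what TTL an ISP would actually want is open.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 32)

    def announce_piece(piece_index: int, data: bytes) -> None:
        """Transmit one piece once; every peer subscribed to the group receives it."""
        for offset in range(0, len(data), CHUNK):
            header = struct.pack("!II", piece_index, offset)
            sock.sendto(header + data[offset:offset + CHUNK], (MCAST_GROUP, MCAST_PORT))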

Should P2P clients set the ToS/DSCP/whatever-they're-called-this-week bits to indicate that these are bulk transfers? Would ISPs use these hints sensibly, or would they just use them to add additional barriers into the network?
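
Setting the bits is the easy part; per connection it is something like this (a rough Python sketch, assuming CS1 / "lower effort" is a sensible code point for bulk traffic and using a placeholder peer name):

    import socket

    DSCP_CS1 = 0x08              # scavenger-style class commonly suggested for bulk traffic
    TOS_BYTE = DSCP_CS1 << 2     # DSCP occupies the top six bits of the old ToS byte

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_BYTE)
    # Mark the socket before connecting/sending; "peer.example.net" is a placeholder.
    # sock.connect(("peer.example.net", 6881))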

Should P2P clients avoid TCP entirely because of its "fairness between flows", and instead try to implement their own congestion control algorithms on top of UDP that attempt to treat all P2P connections as one single "congestion entity"? What happens if the first implementation of this is buggy?
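
Even before real congestion control, the "one congestion entity" idea can be approximated by making every connection draw from a single shared budget. A rough Python sketch follows; the token bucket is only a stand-in for a proper congestion controller, and the numbers are made up:

    import threading
    import time

    class SharedUploadBudget:
        """One token bucket shared by every peer connection, so the whole
        application backs off as a single entity rather than as N TCP flows."""

        def __init__(self, rate_bytes_per_sec: float, burst_bytes: float):
            self.rate = rate_bytes_per_sec
            self.capacity = burst_bytes
            self.tokens = burst_bytes
            self.last_refill = time.monotonic()
            self.lock = threading.Lock()

        def consume(self, nbytes: int) -> None:
            """Block until nbytes of budget is available, then spend it."""
            while True:
                with self.lock:
                    now = time.monotonic()
                    self.tokens = min(self.capacity,
                                      self.tokens + (now - self.last_refill) * self.rate)
                    self.last_refill = now
                    if self.tokens >= nbytes:
                        self.tokens -= nbytes
                        return
                time.sleep(nbytes / self.rate)

    # Every send path in the client calls budget.consume(len(datagram)) before sending.
    budget = SharedUploadBudget(rate_bytes_per_sec=50_000, burst_bytes=16_000)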

Should P2P clients attempt to mark all their packets as coming from a single application so that ISPs can apply QoS to them as one single entity (e.g. by setting the IPv6 flow label to the same value for all P2P flows)?
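
For the flow label case, the only hard requirement is that every flow from the application carries the same 20-bit value. A rough Python sketch of deriving one is below; actually attaching it to sockets needs OS support (on Linux, the IPV6_FLOWLABEL_MGR / IPV6_FLOWINFO_SEND socket options), which I haven't shown:

    import hashlib

    def application_flow_label(app_name: str) -> int:
        """Derive one stable 20-bit IPv6 flow label for the whole application,
        so every flow can be marked identically and policed as a single entity."""
        digest = hashlib.sha1(app_name.encode()).digest()
        label = int.from_bytes(digest[:3], "big") & 0xFFFFF   # flow label is 20 bits
        return label or 1   # avoid 0, which means "no label"

    LABEL = application_flow_label("examplep2p")   # placeholder application name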

What incentive can the ISP offer end users for doing any of this, to keep them from simply turning these features off and going back to the way things are done today?

Software is easy to fix, and thanks to the automatic update mechanisms in much P2P software, the network could see a global improvement very quickly.

So what other ideas do operations people have for how these things could be fixed from the P2P software point of view?