North American Network Operators Group
Re: Can P2P applications learn to play fair on networks?
On Sun, 28 Oct 2007, Mikael Abrahamsson wrote:
>>> Why artificially keep access link speeds low just to prevent
>>> upstream network congestion? Why can't you have big access links?
>>
>> The university network engineers are saying adding capacity alone
>> isn't solving their problems.
>
> Since I know people that offer 100/100 to residential users that
> upstream this with GE/10GE in their networks and they are happy with
> it, I don't agree with you about the problem description.
>
> For statistical overbooking to work, a good rule of thumb is that the
> upstream can never be more than half full normally, and each customer
> cannot have more access speed than 1/10 of the speed of the upstream
> capacity.
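The rule of thumb above can be sketched as a quick sanity check. This is a minimal illustration with assumed numbers (a GE upstream, a 450 Mbit/s expected load); the function names and figures are mine, not from the post.

```python
# Sketch of the overbooking rule of thumb (assumed numbers, not from
# the original post): the upstream should normally stay under half
# full, and no single access link should exceed 1/10 of the upstream.

def max_access_speed(upstream_mbps: float) -> float:
    """Largest access speed one customer may have: 1/10 of upstream."""
    return upstream_mbps / 10

def headroom_ok(expected_load_mbps: float, upstream_mbps: float) -> bool:
    """Normal load should never exceed half the upstream capacity."""
    return expected_load_mbps <= upstream_mbps / 2

upstream = 1000.0  # GE upstream, in Mbit/s
print(max_access_speed(upstream))    # 100.0 -> 100/100 access fits
print(headroom_ok(450.0, upstream))  # True: within the 50% headroom
print(headroom_ok(990.0, upstream))  # False: a 99%-loaded ring breaks the rule
```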
If you restrict demand, statistical multiplexing works. The problem is how you restrict demand. What happens when 10 x 100/100 users drive demand on your GigE ring to 99%? What happens when P2P becomes popular and 30% of your subscribers use P2P? What happens when 80% of your subscribers use P2P? What happens when 100% of your subscribers use P2P?

TCP "friendly" flows voluntarily restrict demand by backing off when they detect congestion. The problem is that TCP's fairness is per flow: it assumes single flows, not the groups of parallel flows some applications open.
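The per-flow fairness point can be made concrete with a toy calculation. This is a hypothetical sketch (the flow counts are assumed): if every TCP flow on a congested link converges to roughly an equal share, a host opening many parallel flows crowds out a single-flow host.

```python
# Hypothetical illustration of per-flow TCP fairness (flow counts
# assumed): each flow gets an equal share of a congested link, so a
# multi-flow application takes far more than a single-flow one.

def share_of_link(my_flows: int, other_flows: int) -> float:
    """Fraction of a congested link one host gets if every TCP flow
    converges to an equal per-flow share."""
    return my_flows / (my_flows + other_flows)

# One web user (1 flow) against one P2P user running 20 parallel flows:
print(share_of_link(1, 20))   # ~0.048 -> the single-flow user gets ~5%
print(share_of_link(20, 1))   # ~0.952 -> the P2P user gets ~95%
```

Backing off per flow does nothing to rebalance this: both hosts' flows back off equally, and the 20-flow host keeps its 20 shares.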