North American Network Operators Group


Re: Network end users to pull down 2 gigabytes a day, continuously?

  • From: Colm MacCarthaigh
  • Date: Sat Jan 06 09:10:41 2007

On Sat, Jan 06, 2007 at 03:18:03AM -0500, Robert Boyle wrote:
> At 01:52 AM 1/6/2007, Thomas Leavitt <[email protected]> wrote:
> >If this application takes off, I have to presume that everyone's 
> >baseline network usage metrics can be tossed out the window...

That's a strong possibility :-) 

I'm currently the network person for The Venice Project, busy building
out our network but also involved in the design and planning work and a
bunch of other things.

I'll try to answer any questions I can. I may be a little restricted in
revealing details of forthcoming developments and so on, so please
forgive me if there's later something I can't answer, but for now I'll
try to answer any of the technicalities. Our philosophy is to be pretty
open about how we work and what we do.

We're actually working on more general-purpose explanations of all this,
which we'll be putting online soon. I'm not from our PR dept or a
spokesperson, just a long-time NANOG reader and occasional poster
answering technical stuff here, so please don't just post the archive
link to digg/slashdot or whatever.

The Venice Project will affect network operators and we're working on a
range of different things which may help out there.  We've designed our
traffic to be easily categorisable (I wish we could mark a DSCP, but the
levels of access needed on some platforms are just too restrictive) and
we know how the real internet works. Already we have aggregate per-AS
usage statistics, and have some primitive network proximity clustering.
AS-level clustering is planned.
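
To make the proximity idea concrete, here's a rough Python sketch of
what AS-level clustering of peers looks like. It's purely illustrative
(a hand-rolled longest-prefix match over made-up sample prefixes and
private ASNs), not our actual implementation; in practice the
prefix-to-AS table would be built from BGP data:

    import ipaddress

    # Toy prefix-to-origin-AS table using documentation prefixes and
    # private ASNs as sample data; a real table would come from BGP feeds.
    PREFIX_TO_ASN = {
        ipaddress.ip_network("192.0.2.0/24"): 64496,
        ipaddress.ip_network("198.51.100.0/24"): 64497,
        ipaddress.ip_network("198.51.100.128/25"): 64498,
    }

    def origin_asn(addr):
        """Longest-prefix match of a peer address to its origin AS."""
        ip = ipaddress.ip_address(addr)
        best = None
        for net, asn in PREFIX_TO_ASN.items():
            if ip in net and (best is None or net.prefixlen > best[0].prefixlen):
                best = (net, asn)
        return best[1] if best else None

    def cluster_by_asn(peers):
        """Group candidate peers by origin AS, so a client can prefer
        sources in its own AS and keep traffic off transit links."""
        clusters = {}
        for peer in peers:
            asn = origin_asn(peer)
            if asn is not None:
                clusters.setdefault(asn, []).append(peer)
        return clusters

    print(cluster_by_asn(["192.0.2.10", "198.51.100.7", "198.51.100.200"]))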

This will reduce transit costs, but there's not much we can do for other
infrastructural, L2 or last-mile costs. We're L3 and above only.
Additionally, we predict a healthy chunk of usage will go to our "Long
tail servers", which are explained a bit here;

	http://www.vipeers.com/vipeers/2007/01/venice_project_.html

and in the next 6 months or so, we hope to turn up at IXes and arrange
private peerings to defray the transit cost of that traffic too.
Right now, our main transit provider is BT (AS5400), who are at some
well-known IXes.

> Interesting. Why does it send so much data? 

It's full-screen TV-quality video :-) After adding all the overhead for
the p2p protocol and stream resilience, we still only use a maximum of
320MB per viewing hour.

The more popular the content is, the more sources it can be pulled from
and the less redundant data we send, and that number can be as low as
220MB per hour viewed. (Actually, I find this a tough thing to explain
to people in general; it's really counterintuitive that more
peers == less bandwidth. I'm still searching for a useful user-facing
metaphor; anyone got any ideas?)
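
Failing a good metaphor, maybe arithmetic helps. The toy model below is
just that, a toy (the 1/n falloff in redundancy overhead is my
simplification, not our actual scheme), but it shows how the per-hour
figure falls from the 320MB worst case toward the 220MB floor as the
number of source peers grows:

    # Toy model only: with a single source you need heavy protection
    # against loss, so lots of redundant data is sent; each additional
    # independent source peer shrinks that overhead. The 1/n falloff is
    # an illustrative assumption, not our real scheme.
    FLOOR_MB_PER_HOUR = 220   # best case quoted above
    WORST_MB_PER_HOUR = 320   # single-source worst case quoted above

    def mb_per_hour(num_sources):
        overhead = (WORST_MB_PER_HOUR - FLOOR_MB_PER_HOUR) / num_sources
        return FLOOR_MB_PER_HOUR + overhead

    for n in (1, 2, 4, 10):
        print("%2d source peer(s): ~%.0f MB per hour viewed" % (n, mb_per_hour(n)))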

To put that in context: a 45-minute episode grabbed from a file-sharing
network will generally eat 350MB on disk, and slightly more crosses the
wire once you account for the roughly 2% TCP/IP overhead and p2p
protocol headers. And it will usually take longer than 45 minutes to
get there.

Compressed digital television works out at between 900MB and 3GB an hour
viewed (raw is in the tens of gigabytes). DVD is of the same order.
YouTube works out at about 80MB to 230MB per hour, for a mini-screen
(though I'm open to correction on that; I've just multiplied the
bitrates out).
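
If you want to redo that multiplication yourself, the unit conversion
is simple; the snippet below is just the arithmetic (decimal megabytes
assumed, and the ~300kbit/s mini-screen rate is my guess, hence the
range above):

    # Convert between stream bitrate and the MB-per-viewing-hour figures
    # used above (1 MB = 10**6 bytes assumed throughout).
    def mb_per_hour_from_kbps(kbps):
        return kbps * 1000 / 8 * 3600 / 1e6

    def kbps_from_mb_per_hour(mb):
        return mb * 1e6 * 8 / 1000 / 3600

    print("320 MB/hour ~= %.0f kbit/s" % kbps_from_mb_per_hour(320))  # ~711
    print("220 MB/hour ~= %.0f kbit/s" % kbps_from_mb_per_hour(220))  # ~489
    # A ~300 kbit/s mini-screen stream lands inside the quoted range:
    print("300 kbit/s  ~= %.0f MB/hour" % mb_per_hour_from_kbps(300))  # ~135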

> Is it a peer to peer type of system where it redistributes a portion
> of the stream as you are viewing it to other users?

Yes, though not necessarily as you are viewing it. A proportion of what
you have viewed previously is cached and can be made available to other
peers.
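
Conceptually it's just a bounded cache of recently viewed chunks that
other peers can request. The sketch below is a minimal illustration of
that idea; the eviction policy and cache size are my stand-ins, not a
description of our client:

    from collections import OrderedDict

    # Minimal illustration: keep a bounded store of recently viewed
    # stream chunks and serve them to other peers on request. LRU
    # eviction and the 512-chunk bound are assumptions for the sketch.
    class ChunkCache:
        def __init__(self, max_chunks=512):
            self._chunks = OrderedDict()
            self._max = max_chunks

        def store(self, chunk_id, data):
            """Cache a chunk we just viewed, evicting the oldest if full."""
            self._chunks[chunk_id] = data
            self._chunks.move_to_end(chunk_id)
            while len(self._chunks) > self._max:
                self._chunks.popitem(last=False)

        def serve(self, chunk_id):
            """Hand a cached chunk to a requesting peer, if still held."""
            return self._chunks.get(chunk_id)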

-- 
Colm MacCárthaigh                        Public Key: [email protected]