North American Network Operators Group


Re: Real Media and M-Bone feeds

  • From: Sam Thomas
  • Date: Tue Oct 05 17:32:35 1999

Disclaimer: I think I know what I'm talking about, caveat emptor. ;-)

On Tue, Oct 05, 1999 at 12:40:22PM -0700, Vadim Antonov wrote:
> 
> The problem is, everyone thinks that million lemmings can't be wrong.

I am not foolish enough to argue with a million lemmings. One need only
convince the one in front to change direction.

> Multicasting cannot be made scalable.  It is as simple as that.
> One can play with multicast routing protocols as much as one
> wishes - in pilots and small networks.  It only takes a question -
> "ok, how are we going to handle 100000 multicast channels?" to
> realize that L3 multicasting is not going anywhere as a real-world
> production service.

That a more scalable solution has not yet been developed is not evidence
that one does not exist. Classful routing and address assignment weren't
scalable, either, but we have (hopefully) come out of that era with some
cleverness and reliance on good design. You ask "ok, how are we going to
handle 100000 multicast channels?" rhetorically, assuming that there is
no answer to the query. Multicast is, in all reality, in its infancy. There
really aren't enough people using it to make scalability an urgent issue.
In order for cool ideas like internet broadcast television and radio
services to work, the internet will require multicasting, or Ma Bell is
going to have to create cheap bandwidth of gargantuan proportions. I'm not
holding my breath for the telephone companies to do anything earth-shattering
in the next decade. <cliche>Necessity is the mother of invention.</cliche>
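
Just to put a rough number on the state question (a back-of-the-envelope
under my own assumptions, not measured figures): the raw forwarding-table
memory for 100000 groups is not, by itself, the frightening part. Protocol
churn and tree maintenance are a separate question.

    # Back-of-the-envelope only; the ~200 bytes per (source, group)
    # forwarding entry is my guess, not a measured number.
    channels = 100000
    bytes_per_entry = 200
    total_mb = channels * bytes_per_entry / (1024.0 * 1024.0)
    print(round(total_mb, 1), "MB of forwarding state")   # roughly 19 MB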

> Nah.  It is a clear and present case of not thinking hard enough.

Indeed, and you're willing to dismiss multicast without a second thought?

> > Worse yet, it distracts from deployment of the real solution - cacheing.

One needs to pay close attention to the problem being solved. I see it
as two cases:

1. Broadcast "live" or "real-time" data. This is what multicast is (should
   be) really good at. Videoconferences with friendly geeks via some caching
   mechanism would be awful at best, and would still require more than one
   feed from the source, or from some replication server (a la CUSeeMe).
   (A bare-bones receiver-side sketch follows this list.)

2. "On-demand" data, such as your friendly neighborhood internet-movie
   rental center. Don't laugh, I expect to see it in my lifetime. These
   could be cached "close to home", assuming that there weren't some legal
   issues with intercepting and storing data someone else paid for. Caching
   is only good for asynchronous data likely to be requested numerous times
   by various parties, e.g. I want to watch the same movie my neighbor is,
   but I want to see it from the start, not pick it up in the middle where
   he is.
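
To make case 1 a little more concrete, here is a bare-bones receiver-side
sketch in Python (the group address and port are made up for illustration).
The point is that a receiver simply tells its first-hop router, via IGMP,
that it wants the group, and exactly one copy of the stream arrives no
matter how many other receivers have joined:

    import socket
    import struct

    GROUP = "239.1.2.3"    # made-up, administratively scoped group address
    PORT = 5004            # made-up port

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))

    # Join the group: the kernel sends an IGMP membership report so the
    # local router knows to pull this group's traffic toward us.
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP),
                       socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    while True:
        data, sender = sock.recvfrom(65535)
        # one copy of the stream, regardless of how many receivers joined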

> > Multicasting is faster than disk.
> 
> This is a rather strange statement.  I worked on a product (which was
> shipped) which delivered something like 20Gb/s of streaming video content
> from disk drives.   RAID can be very fast :)

In case 1 above, as I stated, a cache would not work well for several people
in disparate locations trying to videoconference. How many 20Gb/s streams can
you feed out simultaneously from your box? Say 100 people had the ability
to view data streams running at that speed. Now consider that they all have
22Gb/s connections to your network. With multicast, you can feed all of them
simultaneously from a single 22Gb/s connection to your box of streaming
data. Done with unicast from your box, this would require 2.2Tb/s, and it
would still require the same 22Gb/s incoming stream. Please explain to me
how the cache saves bandwidth in this case.
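
Putting the same numbers in one place (100 receivers, 22Gb/s per stream,
as above):

    receivers = 100
    stream_gbps = 22

    unicast_gbps = receivers * stream_gbps   # a separate copy per receiver
    multicast_gbps = stream_gbps             # one copy, replicated in the network

    print(unicast_gbps, "Gb/s out of the box with unicast")     # 2200, i.e. 2.2Tb/s
    print(multicast_gbps, "Gb/s out of the box with multicast") # 22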

> > I'm not sure how caching
> > is the solution.  Distributed content is also good.
> 
> Ah, distributed content :)  Yet another kludge requiring lots of maintenance
> and "special" relatioships between content providers and ISPs.

As time passes, I can assure you that the line between the two will blur;
whether that happens because of multicast, caching, or greed remains to be seen.

Some suggested goals for multicast design*:

. Ensure that data are replicated as close to the destination as possible.
  (A toy illustration of this point follows below.)
. Ensure that multicast routers do not carry more topology data than are
  absolutely necessary.
. Ensure that the multicast system does not lend itself to DoS abuse, as
  other methods of one-to-many data replication do.

* I am a multicast newbie, and largely illiterate in current implementation,
so don't laugh at my suggestions publicly, please. :-)
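
On the first goal, a toy example of what "replicate as close to the
destination as possible" buys you (the topology is entirely made up):
one source, 4 regional routers, 25 receivers behind each.

    receivers_per_region = 25
    regions = 4

    # Unicast: every copy crosses the source's uplink and a regional link.
    unicast_uplink_copies = regions * receivers_per_region   # 100 copies
    unicast_regional_copies = receivers_per_region           # 25 copies per region

    # Multicast, replicating at the branch points nearest the receivers:
    multicast_uplink_copies = 1      # one copy leaves the source
    multicast_regional_copies = 1    # one copy enters each region, fanned out there

    print(unicast_uplink_copies, unicast_regional_copies)      # 100 25
    print(multicast_uplink_copies, multicast_regional_copies)  # 1 1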

-- 
Those who do not understand Unix are condemned to reinvent it, poorly.
                -- Henry Spencer