North American Network Operators Group


Re: Calling all researchers (was Re: Vulnerabilities of Interconnection)

  • From: sgorman1
  • Date: Sun Sep 15 17:16:19 2002

Thanks for the lead on the research feedback session at NANOG - sounds 
very useful.  Guess I need to whip out an abstract pretty quickly.

> Due to policy constraints the commercial Internet more closely mimics
> a collection of decentralized networks.  

It is this collection of several competing decentralized networks that 
keeps the collective Internet from being a decentralized network itself, 
and puts it more along the lines of what was discussed in the previous 
post. 

> But during a disaster, capacity is capacity.

No doubt there is lots of capacity, and plenty of it is what you 
describe as "shadow capacity".  Several studies have found backbone 
utilization to be as low as 10-15% on average.  The often-asked question 
is not whether there is extra capacity in case of disaster, but whether 
multi-provider cooperation can be arranged to take advantage of it - 
assuming the problem is large enough that it cannot be handled by 
providers individually.
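
To put rough numbers on that headroom, here is a back-of-the-envelope 
sketch in Python.  The link capacities below are invented, since the 
real figures are exactly the proprietary data in question:

# Back-of-the-envelope: headroom implied by 10-15% average
# utilization.  Link capacities (Gbps) are hypothetical examples.
links_gbps = [10.0, 10.0, 2.5, 2.5, 0.622]  # two OC-192s, two OC-48s, an OC-12

total = sum(links_gbps)
for util in (0.10, 0.15):
    spare = total * (1 - util)
    print(f"at {util:.0%} utilization: {spare:.1f} of {total:.1f} Gbps idle")

Even at the high end of that utilization range, roughly 85% of the 
aggregate sits idle - which is the whole argument for caring about who 
can actually get at it during a disaster.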

> The missing piece from most of the previous papers I've
> read is how do you find out how much unused capacity exists? 

Yup - capacity is the missing piece, largely because the data is 
considered proprietary and cannot be obtained through non-invasive 
probing.  Running pathchar takes ages and is not really practical.  
Trying to put together a model based on advertised link capacity is 
sketchy at best, but it might be useful for a best-case scenario. 
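
For what it's worth, a minimal sketch of what such a best-case model 
might look like: treat advertised link capacities as edge capacities 
and run max-flow over the graph.  This uses Python with networkx; the 
topology, city names, and numbers are entirely made up:

# Best-case capacity model: advertised link capacities become edge
# capacities, and max-flow gives an upper bound on what could move
# between two points.  All values here are hypothetical.
import networkx as nx

G = nx.DiGraph()
G.add_edge("DC", "Chicago", capacity=10.0)        # advertised OC-192, Gbps
G.add_edge("DC", "Atlanta", capacity=2.5)         # advertised OC-48
G.add_edge("Atlanta", "Chicago", capacity=2.5)
G.add_edge("Chicago", "Seattle", capacity=10.0)
G.add_edge("Atlanta", "Seattle", capacity=0.622)  # OC-12

best_case = nx.maximum_flow_value(G, "DC", "Seattle")
print(f"best-case DC->Seattle capacity: {best_case:.3f} Gbps")

It is only an upper bound - it says nothing about utilization, routing 
policy, or whether any provider would actually carry the traffic - but 
that is what "best case" means here.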

> The operational community comes back out from behind the curtain
> and tells the research community our network doesn't have that
> problem.  And if it did, it's fixed now :-)

A nice analogy of the status quo, but the problem is that it is like 
going up the Eiffel Tower and trying to get a view of Paris by looking 
through a toilet paper tube.  You suffer from tunnel vision.  It would 
seem that to get an accurate picture you need to look at the aggregate 
and not just individual networks.  Then you hit the proprietary rub and 
you are back to square one.  Government types get edgy and point 
fingers, and the providers say there is no problem.  There are several 
ways this can play out, and I would hope none of them involves the big 
R word - regulation.  


----- Original Message -----
From: Sean Donelan <[email protected]>
Date: Sunday, September 15, 2002 1:19 am
Subject: Calling all researchers (was Re: Vulnerabilities of Interconnection)

> 
> On Sat, 14 Sep 2002 [email protected] wrote:
> > The point being, from a structural standpoint, there is convincing
> > empirical evidence that the Internet is not a distributed network.
> 
> Due to policy constraints the commercial Internet more closely mimics
> a collection of decentralized networks.  However, things are more
> complicated because IP permits networks of any size to interconnect
> at almost any level over almost any communications technology.  We
> normally consider it an error when external traffic transits an
> intranet or extranet.  But during a disaster, capacity is capacity.
> 
> > a small minority has the vast majority of connections.  The applied
> > side of this I think I've posted about before, so please pardon the
> > redundancy, is that the Internet at the AS and router level is very
> > resilient to random failures but highly susceptible to targeted
> > failures.  This has become so predominant in the literature it has
> > left many folks asking questions along the lines of security.
> 
> This has come up on this list and others before.  I've previously
> commented on some of the published papers.  I'll try not to repeat
> everything.  The missing piece from most of the previous papers I've
> read is how do you find out how much unused capacity exists? BGP views
> and Skitter data are very good at finding used paths, but they tell
> you nothing about how much "shadow" capacity exists.
> 
> Generally, after a disaster you'll discover new capacity exists in
> the Internet which you couldn't detect before the disaster.  Where
> does that capacity come from, and why couldn't you detect it before
> the disaster? And how much capacity exists in the shadows?
> 
> > This has led to a scramble to come up with new Internet topology
> > generators - another laundry list of references and approaches - but
> > the reported security implications have had a bumpier road.  Are all
> > these computer scientists and physicists wrong?  I'm sure a good case
> > could be made against them, but it leaves several open questions.
> > There have been several good cases made for the resilience of the
> > status quo and good arguments that there could be problems.  The
> > question is how much of it is political.  One side looking for
> > problems they can point fingers at in the name of homeland security,
> > and one side denying any problem at all because it is a lot cheaper
> > if there is no problem.
> 
> The research community comes up with a model of how the Internet works.
> They hypothesize various outcomes based on that model.  The final
> section of any research report is a list of problems requiring more
> research :-)
> 
> The operational community looks at the model, goes behind the curtain
> and compares the model to their operational networks.  We find the
> model doesn't match.  The operational community comes back out from
> behind the curtain and tells the research community our network
> doesn't have that problem.  And if it did, it's fixed now :-)
> 
> The research community asks, can we please peek behind the curtain?
> The operational community says we respectfully must decline.
> 
> Rinse. Repeat.
> 
> I probably won't make it to the next NANOG in Eugene, but on Sunday
> afternoon NANOG is hosting a research/operation forum to allow
> researchers to solicit feedback from the operations community.  The
> proposal deadline is Monday, September 16.
> 