North American Network Operators Group


Re: Yahoo! Lessons Learned

  • From: Alex Rubenstein
  • Date: Thu Feb 10 19:18:38 2000

It's usually a little out of my league to reply to these types of emails,
but there is one way of attacking this that seems workable (to me, at least).

Agreed, traffic flows are increasing, and are going to increase at an
exponential and alarming rate, which will most likely be correlated with
the amount and relative significance of attacks (such as smurf or
stream.c). It can also be argued that at some point, the speed
capabilities of the core router scheme will not be able to keep up with
current technology. It's my opinion that at some point a 'routed' core
will no longer be feasible, and it will be switched -- using what
mechanism, I don't know (ATM? MPLS?).

However, the edge/aggregation point of the network will still have
reasonable and manageable traffic flows. This is where, IMHO, the
monitoring, netflow, etc., will/should be done, since something can
actually be done about it there with regard to CPU load, etc.

Just some ramblings.




> 
> On 10 Feb 2000, Sean Donelan wrote:
> 
> > On Thu, 10 February 2000, Vijay Gill wrote:
> > > Of course, given that we can get netflow type packet histories, plotting
> > > the src/dest pairs for a while and then if there is a _large_ change (some
> > > n std dev) from the norm for some particular dst (nominally the one under
> 
> > I've wondered what type of statistical sampling could be used to find these
> > attacks, but not require huge amounts of storage.  The theory is these are
> > very large traffic flows which congest the pipe and push other traffic out
> > of the way.  If you sample 1% of the traffic, and 99% of the sample is the
> > same src/dest pair, something may be fishy.
> 
> How about looking at 1 in n packets, for some large value of n, perhaps as
> a percentage of line rate?  This leads into the entire issue of building
> ASICS in the fast path that punt 1 in n out towards some collator
> mechanism, with perhaps the first order data reduction done in the router
> itself before it is handed off. 
> 
> Once again, these things will cost money to build, take time to debug, and
> the entire data collection system will be non trivial to scale. 
> 
> Problems that can be solved given enough talent/time/money, but is anyone
> willing to put forth the effort?
> 
> /vijay
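The sampling scheme discussed in the quoted thread -- look at 1 in n packets, then flag a src/dest pair that dominates the sample -- can be sketched roughly as below. Everything here is illustrative: the deterministic 1-in-n sampler, the 50% dominance threshold (a crude stand-in for the "n std dev from the norm" test Vijay mentions), and all addresses are assumptions, not anything specified in the original mails.

```python
import random
from collections import Counter

def sample_packets(packets, n):
    """Deterministic 1-in-n sampling: keep every n-th packet."""
    return packets[::n]

def dominant_pair(samples, threshold=0.5):
    """Return the (src, dst) pair if it accounts for more than
    `threshold` of the sampled traffic, else None."""
    if not samples:
        return None
    counts = Counter(samples)
    pair, hits = counts.most_common(1)[0]
    return pair if hits / len(samples) > threshold else None

# Background traffic: many distinct src/dst pairs.
random.seed(1)
traffic = [(f"10.0.0.{random.randrange(200)}",
            f"10.1.0.{random.randrange(200)}")
           for _ in range(9000)]
# A flood: one pair pushing everything else out of the way.
traffic += [("203.0.113.7", "192.0.2.1")] * 91000
random.shuffle(traffic)

# Sample 1% of the line and check for a dominant pair.
suspect = dominant_pair(sample_packets(traffic, 100))
print(suspect)  # -> ('203.0.113.7', '192.0.2.1')
```

The point the thread makes still holds: the counting here is the cheap part; doing the 1-in-n punt in the fast path and scaling the collection system is where the talent/time/money goes.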