North American Network Operators Group

Re: Is there a line of defense against Distributed Reflective attacks?

  • From: John Kristoff
  • Date: Fri Jan 17 04:18:56 2003

On Thu, Jan 16, 2003 at 08:48:03PM -0500, Brad Laue wrote:
> Having researched this in-depth after reading a rather cursory article
> on the topic (http://grc.com/dos/drdos.htm), I can think of only two
> main methods to protect against it.

There are a few more methods; some have already been mentioned,
including something called pushback.  Very few solutions, particularly
elegant ones, are widely deployed today.

At some point, sophisticated (or even not so sophisticated) DoS
attacks can be hard to distinguish from valid traffic, particularly
if the attack is widely distributed and its traffic looks as valid
as any other bit of traffic.

> By way of quick review, such an attack is carried out by forging the
> source address of the target host and sending large quantities of
> packets toward a high-bandwidth middleman or several such.

It doesn't have to be forged; that step just makes it harder to
trace back to the original source.  There are some solutions that
try to deal with this, including an IETF working group called
itrace.  UUNET also developed something called CenterTrack.  BBN
has something called Source Path Isolation Engine (SPIE).  There
are probably other things I'm forgetting, but they are generally
similar in concept to these.
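
To give a rough idea of how SPIE-style traceback works: each router
keeps a compact digest of every packet it recently forwarded in a
Bloom filter, and a victim holding a copy of an attack packet can
ask routers, hop by hop, whether they saw it.  Here is a toy Python
sketch of that digest structure (field choices and sizes are
illustrative, not the actual SPIE parameters):

    import hashlib

    class PacketDigestFilter:
        """Toy per-router Bloom filter of packet digests (SPIE-like)."""

        def __init__(self, nbits=2**20, nhashes=3):
            self.nbits = nbits
            self.nhashes = nhashes
            self.bits = bytearray(nbits // 8)

        def _positions(self, packet_bytes):
            # Derive k bit positions from hashes of the invariant
            # packet contents (headers minus TTL/checksum, etc.).
            for i in range(self.nhashes):
                h = hashlib.sha256(bytes([i]) + packet_bytes).digest()
                yield int.from_bytes(h[:8], "big") % self.nbits

        def record(self, packet_bytes):
            for pos in self._positions(packet_bytes):
                self.bits[pos // 8] |= 1 << (pos % 8)

        def maybe_saw(self, packet_bytes):
            # False: definitely not forwarded here.  True: probably
            # forwarded here (Bloom filters can false-positive).
            return all(self.bits[pos // 8] & (1 << (pos % 8))
                       for pos in self._positions(packet_bytes))

Because the lookup keys on packet contents rather than on the source
address, it works even when the source is forged.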

> To my knowledge the network encompassing the target host is largely
> unable to protect itself other than 'poisoning' the route to the host in
> question. This succeeds in minimizing the impact of such an attack on

This is true; the survivability of the victim largely depends on
the security of everyone else, which is what makes solving the
problem so exceptionally difficult.

> the network itself, but also achieves the end of removing the target
> host from the Internet entirely. Additionally, if the targeted host is
> a router, little if anything can be done to stop that network from going
> down.

I'm not sure I fully understand what you're saying here, but a router
can effectively be taken out of service, just as any other end host
or network can, by simply overwhelming it with packets to process
(for itself or to be forwarded).

> One method that comes to mind that can slow the incoming traffic in a
> more distributed way is ECN (explicit congestion notification), but it
> doesn't seem as though the implementation of ECN is a priority for many
> small or large networks (correct me if I'm wrong on this point). If ECN

ECN cannot be an effective solution unless you trust that all edge
hosts, including the attacking hosts, will use it.  Since it is a
mechanism used to signal transmitting hosts to slow down, attackers
can choose not to implement ECN or to ignore ECN signals.  Unless
you can control all the end hosts, and as long as there is
intelligence in the end hosts that a user could modify, this won't
help.
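
To make that concrete, here is a minimal Python sketch of the ECN
codepoints (the low two bits of the IP TOS byte, per RFC 3168) and
of why the scheme rests on sender cooperation; the rate-halving
response below is a simplification of what a real TCP sender does:

    NOT_ECT = 0b00   # endpoint not ECN-capable
    ECT_1   = 0b01   # ECN-capable transport
    ECT_0   = 0b10   # ECN-capable transport
    CE      = 0b11   # router marked: congestion experienced

    def router_mark(tos, congested):
        """A router marks (rather than drops) ECN-capable packets."""
        if congested and (tos & 0b11) in (ECT_0, ECT_1):
            return (tos & ~0b11) | CE
        return tos

    def cooperative_sender(rate, saw_ce):
        # A well-behaved sender backs off when the receiver echoes CE.
        return rate / 2 if saw_ce else rate

    def attacking_sender(rate, saw_ce):
        # Nothing enforces the response: an attacker just ignores the
        # signal (or never sets ECT in the first place).
        return rate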

> is a practical solution to an attack of this kind, what prevents its
> implementation? Lack of awareness, or other?

It is still fairly new and not widely deployed.  Routers not only
need to support it, they also have to be configured to use it.  It
is a fairly significant change to the way congestion control is
currently done in the Internet and it will take some time before
penetration occurs.

> Also, are there other methods of protecting a targeted network from
> losing functionality during such an attack?

Many are reactive, often because you can't know what a DoS is until
it's happening.  In that case, providers can use BGP advertisements
to blackhole hosts or networks (though that can essentially finish
the job the attacker started).  If attacks target a DNS name, the
end hosts can change their IP address (though DNS servers may still
get pounded).  If anything unique about the attack traffic can be
determined, filters or rate limits can be placed as close to the
sources as possible to block it (and that fails as attack traffic
becomes increasingly dispersed and identical to valid traffic).  If
more capacity than attack traffic uses can be obtained, the attack
could be ignored or mitigated (but this might be expensive and
impractical).  If the sources can be tracked, perhaps they can be
stopped (but a large number of sources make this a scaling issue and
sometimes not all responsible parties are as cooperative or friendly
as you might like).  There is also the threat of legal response, which
could encourage networks and hosts to stop and prevent attacks in the
future (this could have negative impacts for the openness of the net
and potentially be difficult to enforce when multiple jurisdictions
are involved).
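
As an illustration of the rate-limiting option, here is a small
Python sketch of a token bucket, the mechanism behind most
per-source or per-protocol rate limits on routers (the numbers are
made up):

    import time

    class TokenBucket:
        """Allow bursts up to `burst` packets, sustain `rate` pkts/s."""

        def __init__(self, rate, burst):
            self.rate = rate
            self.burst = burst
            self.tokens = burst
            self.last = time.monotonic()

        def allow(self, cost=1):
            now = time.monotonic()
            elapsed = now - self.last
            self.tokens = min(self.burst,
                              self.tokens + elapsed * self.rate)
            self.last = now
            if self.tokens >= cost:
                self.tokens -= cost
                return True
            return False   # over the limit: drop (or mark) the packet

    # e.g. cap ICMP echo replies from one source at 100 pps, burst 200:
    limiter = TokenBucket(rate=100, burst=200)

Placed per source or per protocol near the edges, this caps what any
one reflector can contribute; placed only near the victim, it mostly
just chooses which packets to drop.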

From a proactive standpoint, hosts could be secured to prevent an
outsider from using them for attack.  The sorry state of system
security doesn't seem to be getting better and even if we had perfect
end system security, an attacker could still use their own system(s)
to launch attacks.  Eventually it all boils down to a physical
security problem.  Pricing models can be used to make it expensive
to send attack traffic.  How to do the billing and who to bill
might not be so easy, and there may always be a provider who
charges less.  Rate limits can be used on a per-source, per-protocol
or per-flow basis, but given enough hosts and not enough deployment
in the network, this has yet to be effective.  Similarly,
network-based queueing mechanisms (e.g. RED) and the pushback
approaches already mentioned, which penalize or limit high-rate
flows, are not widely deployed yet.
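
For reference, the core of RED is only a few lines.  This Python
sketch follows the usual textbook description (an exponentially
weighted moving average of the queue length, with drop probability
rising linearly between two thresholds); the constants are
illustrative:

    import random

    class RedQueue:
        """Random Early Detection, simplified (no count/idle terms)."""

        def __init__(self, min_th=5, max_th=15, max_p=0.1,
                     weight=0.002):
            self.min_th, self.max_th = min_th, max_th
            self.max_p = max_p     # drop probability at max_th
            self.weight = weight   # EWMA weight for the average queue
            self.avg = 0.0

        def should_drop(self, queue_len):
            # Smooth the instantaneous queue length.
            self.avg += self.weight * (queue_len - self.avg)
            if self.avg < self.min_th:
                return False
            if self.avg >= self.max_th:
                return True
            frac = (self.avg - self.min_th) / (self.max_th - self.min_th)
            return random.random() < self.max_p * frac

Note that RED penalizes whoever fills the queue, which helps against
one aggressive flow but not against an attack spread across
thousands of sources each sending at a modest rate; closing that gap
is what pushback attempts, by having routers identify high-rate
aggregates and ask upstream routers to limit them.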

It often takes a combination of techniques used by as many people
as possible: good incident response teams at as many responsible
network operators as possible, and continued, quicker deployment of
best practices to as many network operators, end users and systems
developers as possible.

...or you architecturally change the Internet, or don't use the
Internet at all.  For example, go back to dumb end systems and place
all the control into a network operated by a select few (e.g. the
traditional telephone model).  You potentially lose all the good
properties the current architecture has, and you aren't going to
get everyone to change with you anytime soon.

I highly recommend Bruce Schneier's book Secrets and Lies, which
applies to so many of these problems and gives you a lot more to
think about, in a much more readable way than what I have said
here.  It is especially insightful with regard to all the
non-technical problems and non-technical responses to them.

John