North American Network Operators Group


Re: Worst Offenders/Active Attackers blacklists

  • From: Patrick W. Gilmore
  • Date: Tue Jan 29 15:58:05 2008


On Jan 29, 2008, at 3:28 PM, Andrew D Kirch wrote:
Patrick W. Gilmore wrote:
On Jan 29, 2008, at 9:43 AM, Jim Popovitch wrote:
On Jan 29, 2008 12:58 AM, Patrick W. Gilmore <[email protected]> wrote:
A general purpose host or firewall is NOTHING like a mail server.
There is no race condition in a mail server, because the server simply
waits until the DNS query is answered. No user is watching the mail
queue; if mail is delayed by 1/10 of a second, or even many seconds,
nothing happens.
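
For concreteness, here is a minimal sketch of the kind of lookup an MTA
performs per connection. The zone name is a placeholder, not a real list:

import socket

def dnsbl_listed(ip: str, zone: str = "dnsbl.example.org") -> bool:
    # DNSBL convention: reverse the octets and query under the list zone,
    # e.g. 192.0.2.1 -> 1.2.0.192.dnsbl.example.org
    query = ".".join(reversed(ip.split("."))) + "." + zone
    try:
        socket.gethostbyname(query)   # any A record answer means "listed"
        return True
    except socket.gaierror:
        return False                  # NXDOMAIN means not listed

The call blocks, the MTA waits, and a slow answer merely delays the mail.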


Now imagine every web page you visit being suddenly paused by 100ms, or
1000ms, or multiple seconds.  Imagine that times 100s or 1000s of
users.  Imagine what your call center would look like the day after
you implemented it.  (Hint: Something like a smoking crater.)
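
Back-of-envelope, with purely illustrative numbers:

LOOKUP_MS      = 100      # one uncached DNSBL query
CONNS_PER_PAGE = 20       # a typical page opens many connections
USERS          = 1000     # users behind the firewall

worst_case_s = LOOKUP_MS * CONNS_PER_PAGE / 1000.0
print(f"up to {worst_case_s:.1f}s of added stall per page, per user")

Two extra seconds per page, times a thousand users, is the smoking crater.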

There might be ways around this (e.g. zone transfer / bulk load), but
it is still not a good idea.
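
As a sketch of the bulk-load variant, assuming the operator permits AXFR
(most large lists do not) and using the third-party dnspython package
with a placeholder zone name:

import dns.query
import dns.zone

def load_dnsbl(server_ip: str, zone_name: str = "dnsbl.example.org") -> set:
    zone = dns.zone.from_xfr(dns.query.xfr(server_ip, zone_name))
    listed = set()
    for name in zone.nodes:
        octets = str(name).split(".")[:4]      # entries look like 1.2.0.192
        if len(octets) == 4 and all(o.isdigit() for o in octets):
            listed.add(".".join(reversed(octets)))   # back to 192.0.2.1
    return listed

Lookups against the local set are O(1) and add no per-packet DNS traffic,
but the copy is only as fresh as the last transfer.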


Of course I could be wrong.  You shouldn't trust me on this, you
should try it in production.  Let us know how it works out.

Andrew, IIUC, suggested that the default would be to allow while the check was performed.

I read that, but discounted it. There has been more than one single-packet compromise in the past. Not really a good idea to let packets through for a while, _then_ decide to stop them. Kinda closing the barn door after yada yada yada.
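
To make the race explicit, here is a sketch of that fail-open design;
the helper names are hypothetical:

import threading

def handle_new_flow(ip, forward_packet, install_block_rule):
    forward_packet()                    # fail open: traffic flows NOW

    def check():
        if dnsbl_listed(ip):            # lookup from the earlier sketch
            install_block_rule(ip)      # ...but only AFTER this returns
    threading.Thread(target=check, daemon=True).start()

Everything sent between forward_packet() and install_block_rule() has
already gotten through, and a one-packet exploit needs exactly one packet.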


I don't disagree with this, but I'm also noting that this is not the universal fix for everything wrong with the Internet. I'll also note that there is in fact no such thing. You also don't get to discount my position, and then assault it, without a reasonable basis for discounting it. One-packet exploits will compromise the host whether this system is running or not. That is the case with any security system, as there is no "one size fits all" solution. So one might say that this system is not intended to deal with such an event, and that there are already methods out there for doing so. Services notoriously vulnerable to a one-packet compromise should be firewalled off from the majority of the Internet (MSSQL, SSH, RPC, anyone?). If you aren't firewalling those services, maybe I should suggest that this is the problem, and not the DNSBL-based trust system that wasn't designed to stop the attack? Network security is, as always, a practice of defense in depth.

Of course there is no One Final Solution. However, each solution necessarily must be more good than bad. This solution has _many_ bad things, each of which is more bad than the solution is good.


For instance, this creates an instant and trivial vector for DDoSing the name servers hosting the DNSBL.
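
Illustrative numbers for why that vector is trivial: every
never-before-seen (and trivially spoofed) source address misses any
cache and forces a fresh query:

SPOOFED_PPS = 1_000_000    # a modest spoofed flood, each packet a new source
QUERIES_PER_PACKET = 1     # every fresh source -> one uncached lookup
print(f"{SPOOFED_PPS * QUERIES_PER_PACKET:,} extra queries/sec at the DNSBL")

The attacker never touches the DNSBL directly; the victim's own firewall
launders the flood into DNS queries for them.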


Perhaps combine the two? Have a stateful firewall which also checks DNSBLs? I can see why that would be attractive to someone, but it is still not a good idea. Not to mention that no DNSBL operator would let any reasonably sized network query them for every new source address - the load would squash the name servers.
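
A sketch of that combination - state means only new sources trigger a
lookup, yet every new source still costs one round trip (simplified to a
dict standing in for the flow table):

seen = {}   # source ip -> verdict

def admit(ip: str) -> bool:
    if ip not in seen:                   # new flow: pay the DNSBL round trip
        seen[ip] = not dnsbl_listed(ip)  # from the earlier sketch
    return seen[ip]

State helps with repeat sources, but a large network sees a steady stream
of first-time addresses, which is exactly the load described above.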

I don't have a disagreement here, but zone transfers are easy to set up.

Sure they are, but zone transfers, while not as bad as individual lookups, are still a bad idea IMHO. For instance, are you sure you want your dynamic filters 30 or 60 minutes out of date?
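
The staleness window is governed by the zone's SOA timers; a quick check
with dnspython (placeholder zone name again):

import dns.resolver

answer = dns.resolver.resolve("dnsbl.example.org", "SOA")
soa = answer[0]
print(f"secondaries refresh every {soa.refresh}s, expire after {soa.expire}s")

With a typical refresh of 1800-3600 seconds, a transferred copy really is
the 30-60 minutes out of date described above.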


BGP was discussed, but such feeds already exist and do not require a firewall.


Anyway, as I have said multiple times now, you don't have to believe me. Please, set this up and tell us how it works. Your real-world experience will beat my mailing list post every time.


--
TTFN,
patrick



As I mentioned, zone-transferring the DNSBL and checking against that might add a modicum of usefulness, but it still has lots of bad side effects.

Then again, what do I know? Please implement this in production and show me I'm wrong. I smell a huge business opportunity if you can get it to work!