North American Network Operators Group

Re: AW: Odd policy question.

  • From: David W. Hankins
  • Date: Fri Jan 13 16:49:48 2006

On Fri, Jan 13, 2006 at 10:09:51AM -1000, Randy Bush wrote:
> > it is a best practice to separate authoritative and recursive servers.
> 
> why?

I'm not sure anyone can answer that question.  I certainly can't.
Not completely, anyway.  There are too many variables and motivations.

Some will answer, "Read RFC2010 section 2.12."

But since that's a document intended specifically for root server
operation, it's not much help to those of us who don't operate
roots.

That's about as helpful as saying, "Because Vixie wrote it."

> e.g. a small isp has a hundred auth zones (secondaried far
> away and off-net, of course) and runs cache.  why should
> they separate auth from cache?

Well, RFC2010 section 2.12 hints at cache pollution attacks, and that's
been discussed already.  Oddly, I can't find the same claim in RFC2870,
which obsoletes 2010, though the direction against offering recursive
service is still there.


But from my own experience, I can still say without a doubt that
combining authoritative and iterative services on one server is a bad
idea.

Spammers resolve a lot of DNS names, usually in very short order - as
short as they can possibly manage, actually.  The bulk of the addresses
on their lists don't even contain registered domain names.

Resolving these bogus domain names uses substantially more CPU than you
might think, spread out over several queries, retries, and timeouts.
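
If you want to convince yourself, a quick measurement is easy to put
together.  This is just a sketch of mine (Python with the dnspython
library, 2.x), not anything from the thread; exactly how expensive a
bogus name is depends on where resolution dies, and names under lame
or unreachable delegations are the worst case:

    import time
    import dns.exception
    import dns.resolver

    resolver = dns.resolver.Resolver()
    resolver.lifetime = 10.0  # total time budget per lookup, seconds

    def timed_lookup(name):
        # Time one recursive lookup, including whatever retries and
        # timeouts the name provokes along the way.
        start = time.monotonic()
        try:
            resolver.resolve(name, "A")
            outcome = "answered"
        except dns.resolver.NXDOMAIN:
            outcome = "NXDOMAIN"
        except dns.exception.Timeout:
            outcome = "gave up"
        return outcome, time.monotonic() - start

    # The second name is a made-up stand-in for an unregistered domain.
    for name in ("example.com", "not-a-registered-name-xyz123.example"):
        outcome, elapsed = timed_lookup(name)
        print("%-40s  %-9s  %.3fs" % (name, outcome, elapsed))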

The result, at a previous employer that did not segregate these
services, was that our nameservers choked to death every time our
colocated customers kicked off a spam run.

The process's CPU utilization goes to 100%, queries start getting
retransmitted, and pretty soon our authoritative queries start getting
universally dropped, because they're a small minority of the traffic in
the system (or the answer comes back so late that the client has
already timed out and forgotten it asked the question).

So if someone on our network was using our recursive nameservers to
resolve targets to spam, nobody else could resolve our names.

Even though our servers were geographically diverse, they were all
recursive - the miscreant clients would spam them all in harmony.

I guess you could say it made it easy to find and shut these miscreants
down.

But I'd much rather 'spammer detection' be based on something that does
not also take my own network down.


Now, certainly, designing a network to be impervious to "our clients:
the spammers" is not a strong motivation for everyone.  But it doesn't
take a spammer to set the same series of events in motion.  It can just
as easily be, say, a lame script on a server that handles error
conditions badly by immediately retransmitting the request.  We got
this too: it flooded our servers with requests for a name within its
own domain, without any pause in between.  We kept having to call the
customer to reboot his NT box, and to put their address space in and
out of our ACLs - a significant operational expense, plus outages that
affected the majority of our customers, all for one small colocation
customer (not a lot of cash).
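
For what it's worth, the client-side fix is ancient advice: back off
between retries instead of retransmitting immediately.  A minimal
sketch (mine, not anything that customer ran), with the resolve
callable standing in for whatever lookup the script performs:

    import random
    import time

    def lookup_with_backoff(resolve, name, max_attempts=5,
                            base_delay=1.0, max_delay=30.0):
        # Call resolve(name); after each failure, sleep a jittered,
        # exponentially growing interval so one broken name can't turn
        # a single client into a query flood.
        for attempt in range(max_attempts):
            try:
                return resolve(name)
            except Exception:
                if attempt == max_attempts - 1:
                    raise
                delay = min(max_delay, base_delay * (2 ** attempt))
                time.sleep(random.uniform(0, delay))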

So I think this is valid advice for pretty much anyone.  It's just a
bad idea to expose your authoritative servers to the same problems an
iterative resolver is prone to.
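
In concrete terms, with BIND 9 that means running two configurations
on two separate sets of boxes.  This is only a sketch; the zone, file,
and network names are placeholders:

    // named.conf on the authoritative servers: serve our zones,
    // refuse recursion outright.
    options {
        recursion no;
    };

    zone "example.com" {
        type master;
        file "example.com.zone";
    };

    // named.conf on the resolvers: recursion for our own clients
    // only, and no authoritative zones loaded.
    acl "customers" { 192.0.2.0/24; };

    options {
        recursion yes;
        allow-recursion { customers; };
    };

That way a spam run, or a looping client, can flatten a resolver
without anyone outside ever noticing that your zones exist.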

-- 
David W. Hankins		"If you don't do it right the first time,
Software Engineer			you'll just have to do it again."
Internet Systems Consortium, Inc.		-- Jack T. Hankins
