North American Network Operators Group


RE: How to achieve application reliability

  • From: Roeland M.J. Meyer
  • Date: Sun Dec 05 13:48:45 1999

> From: Sean Donelan, Sent: Sunday, December 05, 1999 8:37 AM

> For people with ultra-high reliability requirements, a /19 isn't the
> solution.

But, from the discussion, a /19 would be part of the requirement, along with
some form of load-balancing DNS (LBDNS), like Resonate. Yes, the Net can get
blackholed, but isn't that an error condition anyway? One that must be fixed?

Also, we haven't discussed the security implications much. Keeping the rest
of this in mind, those of us using tcp_wrappers, SSH, and other IP-based
filtering would have our lives simplified greatly if we had a common IP
block that covered the entire domain, regardless of which provider a host
sits with. VPNs would also be much less expensive to maintain. Simpler
security == tighter security.
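
To make that concrete, a minimal sketch (modern Python's ipaddress module;
the prefixes are documentation examples, not anyone's real allocations):
with one covering block, every filter is a single membership test, instead
of a per-provider list that has to be kept in sync on every host.

import ipaddress

# One common block covering the whole domain: one rule everywhere.
DOMAIN_BLOCK = ipaddress.ip_network("192.0.2.0/24")

# The alternative: a per-provider patchwork every filter must track.
SCATTERED = [
    ipaddress.ip_network("198.51.100.32/28"),  # hosts at provider A
    ipaddress.ip_network("203.0.113.128/27"),  # hosts at provider B
]

def allowed_common(addr):
    return ipaddress.ip_address(addr) in DOMAIN_BLOCK

def allowed_scattered(addr):
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in SCATTERED)

print(allowed_common("192.0.2.45"))        # True -- one test, one rule
print(allowed_scattered("198.51.100.40"))  # True -- but N rules to update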

The issue is that the cases I have before me should never need more than a
/24, even if the host-count requirements for a /20 could be met. Most of the
hosts would be in NAT'd space anyway, for security reasons I shouldn't need
to go into here. In fact, I could architect both domains into a /25, or even
a /26, if I had to, using NAT'd space (it would be a tight fit with no
growth allowance).
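
For reference, the arithmetic behind that tight-fit claim (a sketch using
the standard CIDR sizes; the NAT'd hosts behind each public address aren't
counted here):

import ipaddress

# Usable host addresses per prefix length (network + broadcast excluded).
for plen in (24, 25, 26):
    net = ipaddress.ip_network("192.0.2.0/%d" % plen)
    print("/%d: %d usable hosts" % (plen, net.num_addresses - 2))
# /24: 254 usable hosts
# /25: 126 usable hosts
# /26: 62 usable hosts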

To review the cases, I have two instances where I have geographically
separated sites that need a common IP block, with multiple providers. I have
an interim solution now, using an SSH VPN, but the maintenance is killing
me. Not to mention that I still haven't reduced the packet load on the
primary site. At one of the sites, distribution of the packet load is ONE of
many reasons for the geo-physical separation. Ergo, that dog doesn't scale
here. In fact, it could DOUBLE traffic at the primary site, since every
packet relayed to the other site has to enter and then leave the primary's
link, and it adds encryption processing load as well. As an aside, IPSEC (as
one person had suggested) would not be an improvement, wrt packet traffic,
at the primary site (while the implementation is slightly different, the
general effects are the same).
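
A back-of-the-envelope sketch of that doubling effect (the traffic figures
are hypothetical, purely for illustration): with tunnels terminating at the
primary site, relayed packets cross its link twice; with a common block
routed directly from both sites, they never touch it.

def primary_site_load(local_pps, relayed_pps, hub_vpn):
    """Packets/sec crossing the primary site's link."""
    if hub_vpn:
        # Relayed traffic enters from the source AND leaves via the tunnel.
        return local_pps + 2 * relayed_pps
    # Direct multihomed routing: relayed traffic bypasses the primary.
    return local_pps

print(primary_site_load(1000, 800, hub_vpn=True))   # 2600
print(primary_site_load(1000, 800, hub_vpn=False))  # 1000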

The only apparent route left would be to burn a /24 at each end, even though
I don't need them. I've got no problem with that, but I thought the priority
goal was conservation of IP space.
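
In numbers (using the hypothetical /25 tight-fit figure from above, not a
measured inventory):

allocated = 2 * 256  # a /24 burned at each end
needed = 128         # the tight-fit /25 figure covering both domains
print("%d of %d addresses idle (%.0f%% waste)"
      % (allocated - needed, allocated, 100.0 * (allocated - needed) / allocated))
# 384 of 512 addresses idle (75% waste)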

BTW, I obviously have sympathy for a movement that would generate an RFC for
CONSISTENT route filtering policies.