North American Network Operators Group


Re: Fwd: cnn.com - Homeland Security seeks cyber counterattack system (Einstein 3.0)

  • From: Patrick Darden
  • Date: Tue Oct 07 15:18:18 2008


J. Oquendo wrote:
Too many companies and individuals rely far
too heavily on a false and outdated concept of the definition of
"minimum requirements" when it comes to security. They tend to
think they need to implement the minimum requirements and all will
be fine. This is evident in almost all security management material
I read where the goal is to offer a "minimum" set of requirements
to meet guidelines and regulatory controls.

What about exceeding the minimum requirements for a change?

What about an entirely different concept? I see a lot of network router/firewall admins make the mistake of merely closing off certain known bad ports. This mostly happens in a university-type situation, where it is necessary--or at least traditional--to have an open network, one able to handle myriad new and changing protocols and services. This is the black-list approach. It is a fundamental approach to security that ends up with "minimum requirements" either met or exceeded, yet with no real effectiveness, no matter what certain experts may claim.
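To make the weakness concrete, here is a rough sketch in Python that just prints iptables commands for the black-list approach. The port list is made up for illustration, not a recommendation:

#!/usr/bin/env python3
# Illustrative sketch of the black-list approach: block a handful of
# "known bad" ports and implicitly trust everything else.
# The port list below is an assumption for illustration only.

KNOWN_BAD_TCP_PORTS = [135, 137, 138, 139, 445, 1433, 1434]  # e.g. legacy SMB/MS-SQL worm ports

def blacklist_rules():
    """Emit iptables commands that drop a few known-bad ports.

    Everything not listed -- including tomorrow's bad port -- is
    still allowed through, which is the fundamental weakness.
    """
    rules = ["iptables -P INPUT ACCEPT"]  # default: trust everything
    for port in KNOWN_BAD_TCP_PORTS:
        rules.append(f"iptables -A INPUT -p tcp --dport {port} -j DROP")
    return rules

if __name__ == "__main__":
    print("\n".join(blacklist_rules()))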


The acknowledged better path is using a white-list instead. Turn everything off by default. Turn off all ports on the router/firewall. Then turn back on only the ones that can be trusted, with as much control as you can throw in there--specifying endpoints and ports, using content inspection, and enforcing protocol correctness with higher-layer, proxy-type services. Modern firewalls can do all of this.
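And here is the white-list version of the same sketch: default-deny on every chain, then explicitly permitted flows. The networks, hosts, and ports are placeholders; content inspection and proxying are left to the firewall itself:

#!/usr/bin/env python3
# Sketch of the white-list approach: default-deny, then re-enable only
# trusted (endpoint, port) pairs. All addresses and ports below are
# illustrative assumptions, not a recommended policy.

TRUSTED_FLOWS = [
    # (source network, destination host, protocol, port)
    ("10.1.0.0/24", "10.2.0.10", "tcp", 443),  # e.g. workstations -> intranet web
    ("10.1.0.0/24", "10.2.0.20", "tcp", 25),   # e.g. workstations -> mail relay
]

def whitelist_rules():
    """Emit iptables commands: drop by default, permit only listed flows."""
    rules = [
        "iptables -P INPUT DROP",    # everything off by default
        "iptables -P FORWARD DROP",
        "iptables -P OUTPUT DROP",
    ]
    for src, dst, proto, port in TRUSTED_FLOWS:
        rules.append(
            f"iptables -A FORWARD -s {src} -d {dst} "
            f"-p {proto} --dport {port} -j ACCEPT"
        )
    return rules

if __name__ == "__main__":
    print("\n".join(whitelist_rules()))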

This would lead to "maximum possible" security, limited only by realities--layers 9 and 10 (money and politics) being the biggies, although layers 1 and 2 are also important.

This would not work in an open environment with 30,000 new laptops coming in at the start of every summer, each running a different brand of Doom (pun intended). But if we are talking about a smaller number of stable networks that are meant primarily to interface with one another, and only network outside of themselves... (wait for it: not secondarily, not tertiarily, not even quaternarily, but instead) perhaps as the least important function, then we have something we can work with. These networks would consist of working machines. Primary purpose: work. Stability, functionality, security of data and communications....

Here you go, my incredibly naive take on it:

0. white-list as the fundamental principle. maximum security.
1. you are starting with a mess. turn off all internetworking on a network until it is compliant with the points below.
2. separate the networks into discrete logical units (by function would be best, if realities such as location/bandwidth permit).
3. separate the workstations.
4. harden the workstations. turn off extra services. only install certain programs. make an image. shoot that image down every now and then to ensure compliance.
5. harden the networks. allow communication between networks only for certain services. specify endpoints and ports, use content inspection, and ensure protocol regulation. check logs for unregulated attempts to communicate between networks (see the sketch after this list).
6. make sure you have adequate pc/networking/security admins to do this--and maintain it. Keeping it all up to date will be a big part of making sure it stays functional.
7. probably this should be #1 instead of #7--start with clear documentation for each of the above points, including assignment of responsibilities by job title.
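
For point 5, this is the sort of naive log check I have in mind. It assumes iptables-style log lines with a "DENY" prefix and SRC=/DST= fields; the internal networks are made up:

#!/usr/bin/env python3
# Sketch for point 5: scan a firewall log for denied attempts to talk
# between internal networks. The log format (iptables-style "SRC=/DST="
# fields behind a "DENY" prefix) and the network list are assumptions.

import ipaddress
import re
import sys

INTERNAL_NETS = [ipaddress.ip_network(n) for n in ("10.1.0.0/24", "10.2.0.0/24")]
LINE_RE = re.compile(r"DENY.*SRC=(\S+).*DST=(\S+)")

def is_internal(addr):
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in INTERNAL_NETS)

def unregulated_attempts(logfile):
    """Yield (src, dst) pairs where one internal net tried to reach
    another and was dropped -- candidates for investigation or for
    policy review."""
    for line in logfile:
        m = LINE_RE.search(line)
        if m and is_internal(m.group(1)) and is_internal(m.group(2)):
            yield m.group(1), m.group(2)

if __name__ == "__main__":
    for src, dst in unregulated_attempts(sys.stdin):
        print(f"blocked inter-network attempt: {src} -> {dst}")

Feed it the firewall log on stdin and investigate whatever it prints.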


--Patrick Darden