North American Network Operators Group


Re: botnets: web servers, end-systems and Vint Cerf

  • From: Eric Gauthier
  • Date: Mon Feb 26 11:01:21 2007


> Can you elaborate a bit on what universities have done which would be
> relevant to service providers here?

Generally, we've found that most end users don't even know that their systems
are infected - be it with spyware, bots, etc. - and are happy when we can help
them clean things up, as they usually aren't in a position to fix things on
their own.  I know that the really bad analogy of driving a car has been used a
few times in this thread, but I think part of the analogy holds.  If someone
owns and uses a car, but the car has no indicator lights to say that something
is wrong, it's hard to believe that the driver will be able to fix the problem
or even know to contact the repair shop.  We've tried to give our users that
"indicator" light and some help repairing things.

Most Universities have adopted the general strategies that came out of the 
Internet2/Educause Salsa-NetAuth working group (see links at the end).  This
general type of architecture has network components doing registration, 
detection, end-user notification, host isolation, and auto-remediation.  In
many cases, most of these systems are already in place and they just need
to be tied together.
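As a rough illustration of how those pieces fit together, here is a minimal
sketch of the registration/detection/isolation/remediation flow in Python.
All of the names, states, and the tracker class are hypothetical - this is
not any particular product or the Salsa-NetAuth reference code, just the
state transitions the architecture implies.

```python
# Hypothetical sketch of the registration -> detection -> isolation ->
# auto-remediation lifecycle described above.  States and class names
# are illustrative only.

REGISTERED, QUARANTINED = "registered", "quarantined"

class HostTracker:
    def __init__(self):
        self.state = {}  # MAC address -> current state

    def register(self, mac):
        # Captive portal registers the MAC on first network access.
        self.state[mac] = REGISTERED

    def flag(self, mac):
        # A detection system (sensor, flow monitor, etc.) reports a
        # problem; the host is forced back to the captive portal.
        if mac in self.state:
            self.state[mac] = QUARANTINED

    def remediated(self, mac):
        # The self-help cleaning process reports success; the host
        # escapes the portal and regains network access automatically.
        if self.state.get(mac) == QUARANTINED:
            self.state[mac] = REGISTERED
            return True
        return False

tracker = HostTracker()
tracker.register("00:11:22:33:44:55")
tracker.flag("00:11:22:33:44:55")
print(tracker.state["00:11:22:33:44:55"])   # quarantined
print(tracker.remediated("00:11:22:33:44:55"))  # True
```

In practice each method would be an integration point (portal web app,
IDS alert hook, DHCP/DNS update) rather than an in-memory dictionary.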

Where I work, we use a captive-portal like system to do MAC registration
and then, if our detection systems determine a host has an issue, we
force the host back to that captive portal and display a self-help page
for cleaning up the particular problem that the user has with their system.
At the end of the process, we provide a mechanism for them to escape the
captive portal and regain network access automatically.  From the statistics 
that we collect from our tools, we used to see about a third of the Windows 
systems come onto our campus at the beginning of the year with some sort of 
infection, with 90% of those cleaned automatically during our registration 
process (we have an initial cleaning tool for new systems).  Of the systems 
that make it past this first round, 90% appear to be caught by our sensors, 
sent back to the captive portal, and are able to self-remediate using our 
cleaning tool.
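Those rates compound.  As a rough worked example, assuming a hypothetical
population of 1,000 incoming Windows systems (the population size is mine;
the one-third and 90% figures are from the statistics above):

```python
# Rough arithmetic on the rates quoted above, for a hypothetical
# population of 1,000 incoming Windows systems.
hosts = 1000
infected = hosts / 3                        # about a third arrive infected
after_registration = infected * 0.10        # 90% cleaned at registration
after_sensors = after_registration * 0.10   # 90% of the rest caught later

print(round(infected))            # 333 infected on arrival
print(round(after_registration))  # 33 slip past registration
print(round(after_sensors))       # 3 remain uncaught
```

So under these assumptions, roughly 99% of the initially infected hosts
end up remediated by one of the two stages.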

Other Universities have similar systems, but invert the "registration" idea.
For example, one place allows open network access until their sensors detect
a problem with a given host.  At that point, the host is logged into their
system with an indication of the problem, and then shunted back to the captive
portal with instructions for cleaning up the system.

As Sean and others pointed out, you need a business case for something like
this.  In our case, we already had a help desk, tools and documentation for 
cleaning up infected systems, a sensor network, web servers, DNS servers with 
Views support, and a DHCP system that easily allowed the mapping of classes of 
MACs into pools.  The cost for us was in adding the database to track things, 
some development time to build the web interface, and some of the hooks that
link everything together.  The hard savings for us came from fewer calls to the 
help desk and fewer incidents for our security team to handle (i.e. less staff
or slower growth in staff).  We also gained the soft benefit of students
believing that the network actually works and works well.
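The DHCP-side mapping of MAC classes into pools can be sketched roughly as
follows.  The status table, pool names, and addresses are all hypothetical;
a real deployment would express this in the DHCP server's own class/pool
configuration rather than application code.

```python
# Illustrative sketch: choose an address pool based on a host's status.
# The status table and pool addresses are invented for this example; a
# real DHCP server would match MAC classes to pools in its own config.

STATUS = {
    "00:11:22:33:44:55": "registered",
    "66:77:88:99:aa:bb": "quarantined",
}

POOLS = {
    "registered": "10.1.0.0/16",     # normal campus network
    "quarantined": "10.250.0.0/24",  # routes only to the captive portal
    None: "10.250.0.0/24",           # unknown MACs also hit the portal
}

def pool_for(mac):
    # Unknown and quarantined hosts both land on the portal subnet.
    return POOLS[STATUS.get(mac)]

print(pool_for("00:11:22:33:44:55"))  # 10.1.0.0/16
print(pool_for("66:77:88:99:aa:bb"))  # 10.250.0.0/24
```

The point is that the quarantine pool only needs routes to the portal,
the DNS views, and the remediation resources, so no per-host firewall
rules are required to isolate an infected machine.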

Eric :)

Here are some presentations that I've done:

Defending Against Yourself
Automated Network Techniques to Protect and Defend Against Your End Users
(February, 2006)

Network Architecture for Automatic Security and Policy Enforcement
(Sept 2005)

Life on a University Network: 
An Architecture for Automatically Detecting, Isolating, and Cleaning Infected Hosts
(February, 2004)