North American Network Operators Group


Re: mSQL Attack/Peering/OBGP/Optical exchange

  • From: David Diaz
  • Date: Tue Feb 04 15:56:08 2003


Well, the feedback on-list and the extensive feedback off-list was great. The respondents seem to feel that, because of the rapid onset of the attack, a dynamically allocated optical exchange might have exacerbated the problem. But that is also the benefit: it allows flexible bandwidth with a nonblocking backplane, so a backbone facing a critical event such as a webcast has the capacity it needs when it needs it. A common shared-backplane architecture might provide a natural bottleneck during an attack, but one can also see that same limit as a growth problem the rest of the time.

Respondents strayed from the specific question of how an optical exchange would behave under an mSQL-type attack and went into the general pros and cons. The number one topic: billing.

Billing was also the biggest challenge in implementing the technology. Once the capability was there, and real-world tests showed the technology was actually functional, no one was exactly sure what business algorithm to charge by. Most commentators were concerned about losing billing control: that a peer (possibly under attack) might cause fees to be assessed to your own backbone. It must be understood that your network must give approval before this can happen. And if you have CNM (customer network management) enabled, even running on a screen in your NOC, you are aware immediately when it happens. Without that, each peer is locked down to whatever size pipe you have chosen.
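
As a rough illustration of that approval step (peer names and sizes below are invented; this is just a sketch of the policy described above, not anyone's actual implementation):

# Sketch of the approval gate described above: a peer-initiated burst is
# only provisioned if our side has explicitly allowed it; otherwise the
# peer stays locked to its configured pipe size. Names are illustrative.

APPROVED_BURST_PEERS = {"peer-as-65001"}      # peers we allow to trigger bursts
CONFIGURED_PIPE = {"peer-as-65001": "OC3",    # normal locked-down sizes
                   "peer-as-65002": "OC12"}

def handle_burst_request(peer, requested_size, notify_noc):
    """Return the capacity to provision for a peer-initiated burst request."""
    if peer not in APPROVED_BURST_PEERS:
        # No approval on file: the peer stays at its configured pipe size.
        return CONFIGURED_PIPE[peer]
    notify_noc(f"{peer} bursting to {requested_size}")  # CNM-style visibility
    return requested_size

print(handle_burst_request("peer-as-65002", "OC48", print))  # -> OC12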

On the billing, it might be a flat rate with the ability to "burst" to a higher capacity. Perhaps the burst is included in the flat rate, or perhaps you are allowed a certain number of burst hours, etc. No one has gotten a clear picture. The simplest answer is probably to do, as was mentioned, something similar to IP: bill to the 95th percentile. It seems fair. Use a multiplier of DS0s per hour x $ and go with that. You might even lock it down so that at a certain $ figure, no more bursting is allowed. I do not like tying billing to network control that way, but it would seem that CFOs would demand some kind of ceiling limit.
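
A rough sketch of that scheme (the sample numbers, per-DS0-hour rate, and ceiling are made up; this only shows the 95th-percentile idea with a DS0 multiplier and a dollar cap):

# 95th-percentile billing sketch: take hourly utilization samples in DS0s,
# drop the top 5%, bill the resulting level at a per-DS0-hour rate, and
# refuse further bursting once a dollar ceiling is reached.

def billable_level(samples_ds0):
    """95th percentile of hourly DS0 utilization samples."""
    ordered = sorted(samples_ds0)
    idx = int(0.95 * len(ordered)) - 1       # discard the top 5% of samples
    return ordered[max(idx, 0)]

def monthly_charge(samples_ds0, rate_per_ds0_hour, ceiling_dollars):
    level = billable_level(samples_ds0)
    charge = level * rate_per_ds0_hour * len(samples_ds0)
    burst_allowed = charge < ceiling_dollars  # CFO-style cap on further bursting
    return min(charge, ceiling_dollars), burst_allowed

# e.g. a month of hourly samples hovering around an OC-3 (~2016 DS0s)
samples = [2016] * 700 + [8064] * 20          # mostly OC-3, a few OC-12 bursts
print(monthly_charge(samples, 0.001, 50000))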

As for oscillation between protection schemes in different layers: this has been a concern with things like IP-over-ATM networks. It should not be a problem here, and there has been a lot of testing. It is true that the possibility of thrashing exists, but probably not at sub-50ms layers; we have that now over SONET private peering circuits. Even in a metro-wide optical exchange, with the two farthest points on the mesh roughly 100 miles apart, reroute time was 16ms. Those were the real-world results when we were breaking routes while testing the network.
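
One common way to keep the layers from fighting is a hold-off timer at the higher layer; the values below are illustrative only, not from the tests mentioned above:

# Hold-off sketch: the IP layer waits longer than the optical layer's
# worst-case restoration time before declaring the path dead, so a 16-50ms
# optical reroute never triggers a competing IP-layer reroute.

OPTICAL_RESTORE_WORST_MS = 50      # sub-50ms optical/SONET protection
IP_HOLDOFF_MS = 150                # IP layer holds off longer than that

def ip_should_reroute(outage_duration_ms):
    """Only reroute at the IP layer if the optical layer failed to restore."""
    return outage_duration_ms > IP_HOLDOFF_MS

print(ip_should_reroute(16))    # False: the optical reroute handled it
print(ip_should_reroute(400))   # True: optical protection did not recover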

There were some discussions of rule sets, with no conclusions. Filters should probably be left to the backbones, with very little control at the optical layer (the IX). The only rule sets might relate to service levels or billing.

David






At 9:11 +0100 2/4/03, Kurt Erik Lindqvist wrote:
Actually, I think that was the point of the dynamic provisioning ability. The UNI 1.0 protocol, or the earlier ODSI, was meant to allow the routers to provision their own capacity. The real-world tests actually worked, although I believe the results are still under NDA.

The point was to provision or reprovision capacity as needed. Without getting into arguments over whether this is a good idea, the goal was to "pay" for what you used when you used it. The biggest technical factor was "how the heck do you bill it."

If a customer goes from their normal OC3 ---> OC12 for 4hrs three times in a month... what do you bill them for? Do you take it down to the DS0/min level and just multiply, or do you do a flat rate, or a per-upgrade charge?
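
For scale, a back-of-the-envelope version of the DS0/min approach for that example (standard DS0-per-OC-n counts; the per-DS0-minute rate is a made-up illustration):

# Three 4-hour bursts from OC-3 to OC-12 in one month. An OC-3 carries
# 3 DS3s (2016 DS0s), an OC-12 carries 12 DS3s (8064 DS0s); only the extra
# capacity during the burst is billed here.

extra_ds0s    = 8064 - 2016            # capacity added while bursting
burst_minutes = 3 * 4 * 60             # three 4-hour bursts
ds0_minutes   = extra_ds0s * burst_minutes
print(ds0_minutes)                     # 4,354,560 DS0-minutes
print(ds0_minutes * 0.0005)            # ~$2,177 at the illustrative rate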

The point was that you could bump up on the fly as needed, capacity willing, and then back down. The obvious factor is having enough spare capacity in the bucket. This should not be an issue within the four walls of a colo. If it's a beyond-the-four-walls play, then there should be spare capacity available that normally serves as redundancy in the mesh.

The other interesting factor is that now you have sort of a TDMA arrangement going on (very loose analogy here), in that your day can theoretically be divided into three time zones.

In the zone:
8am - 4pm ----- Business users, Financial backbones etc
4pm -12am ----- Home users, DSL, Cable, Peer to Peer
12am - 8am ---- Remote backup services, foreign users, etc.

Some of the same capacity can be reused based on peer needs.
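
A toy view of that reuse, using the hour boundaries above (the peer classes are only labels; the point is that one pool of spare capacity serves different needs at different hours):

# Toy schedule for reusing the same burst capacity across the three zones.

SCHEDULE = [
    (range(8, 16),  "business / financial backbones"),
    (range(16, 24), "home users: DSL, cable, peer-to-peer"),
    (range(0, 8),   "remote backup, foreign users"),
]

def who_gets_burst_capacity(hour):
    for hours, peer_class in SCHEDULE:
        if hour in hours:
            return peer_class

print(who_gets_burst_capacity(14))   # business / financial backbones
print(who_gets_burst_capacity(2))    # remote backup, foreign users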

This sort of addresses the "how do I design my backbone" argument, where engineers have to decide whether to build for peak load, providing maximum QoS but also the highest-cost backbone, or to build for average sustained utilization. This way you can theoretically get the best of both worlds, as long as the billing goes along with it.

You are right that this is a future play. But I thought it was interesting to ask: if all this technology were enabled today, what effect would the mSQL worm have had? Would some of these technologies have exacerbated the problems we saw? I am trying to get better feedback on these future issues; so far some of the offline comments and perspectives have been helpful and insightful, as has yours...

Well, the problem with optical bandwidth on demand is that you will have to pay for the network even when it isn't being used. Basically you have three billing principles: pay per usage, pay for the service, or a mix of the two. With all of the models you still need to distribute the cost over the bandwidth, and in the worst case this will end up being higher per unit of transferred data.

- kurtis -
--

David Diaz
[email protected] [Email]
[email protected] [Pager]
www.smoton.net [Peering Site under development]
Smotons (Smart Photons) trump dumb photons