North American Network Operators Group


Minutes for IETF Network Status Reports.

  • From: Gene Hastings
  • Date: Sat Aug 13 15:22:19 1994

GREAT THANKS to Marsha Perrott for chairing the meeting in my absence and
to Rob Reschly for the notes.

Speakers:
Scott Bradner <[email protected]>         CoREN Update
Tim Seaver <[email protected]>              NC-REN (formerly CONCERT)
Susan Hares <[email protected]>             "Transition from NSFnet Backbone
                                              to the NAPland"
Guy Almes <[email protected]>               NSFnet statistics
Eric Carroll <[email protected]>    CA*net
Andrew Partan <[email protected]>        MAE-East Evolution

Supplements and corrections by:
"Steven J. Richardson" <[email protected]>
Susan Hares <[email protected]>
Tim Seaver <[email protected]>
Pushpendra Mohta <[email protected]>
"Martin Lee Schoffstall" <[email protected]>

Thanks,
Gene

  Minutes of the Toronto IETF'30 Netstat Working Group
 ================================================================
submitted to [email protected]
submitted by [email protected]

================
CoREN:
Scott Bradner

The status of the CoREN Network Services Request for Proposals (RFP) process
was briefed.  Scott emphasized one key feature of this RFP:  it will result
in a contract to provide services to the regionals, not in a contract to
build a backbone to interconnect regionals.  Since they are buying a
service, CoREN expects to be one customer among many using the same service.

CoREN does not want to have to rely on the NAPs for everything.  CoREN feels
NAPs and RAs are a good idea, but....

Scott observed that dollars flow from the NSF to the regionals to fully
connected network service providers (NSPs) to the NAPs.  The only NSPs
eligible to provide connectivity paid for by NSF funding are those which
connect to all the primary NAPs (NY, IL, CA).

The CoREN provider will establish connectivity to all primary NAPs,
MAE-East, and the CIX.

Scott was asked about planned NOC responsibilities:  NOC integration and
coordination are being worked on.  Discussion points include relative
responsibilities, e.g. the NEARnet vs. CoREN provider hand-off.

When asked for information on non-CoREN American provider plans, Scott knew
of at least two providers who will be at other NAPs.  Scott indicated MCI
will be at the Sprint NAP soon, others later.

As for the CoREN RFP evaluation, more than one of the proposals was close
from a technical perspective, and they were close financially.  The selected
provider came out ahead on both measures and additionally offered to
support a joint technical committee to provide a forum for working issues as
they arise.  In particular, early efforts will focus on quantifying QOS
issues, as these were intentionally left out of the specification so they
can be negotiated as needed (initially and as the technology changes).

The circuits are coming in and routers (Cisco 7000s) are being installed in
the vendor's PoPs this week.  First bits will be flowing by 1 August.  Line
and router loading and abuse testing is expected to commence by 15 August,
and production testing should be underway by 15 September.  Cutover is
expected before 31 October.

Someone noted there may be some sort of problem related to route cache
flushing in the current Cisco code which could impact deployment.

================
NC-REN (formerly CONCERT):
Tim Seaver

CONCERT is a statewide video and data network operated by MCNC.
  - primary funding from State of NC
  - currently 111 direct, 32 dialup, and 52 uucp connections
  - 30K+ hosts
  - 4.5Mbps inverse multiplexed 3xDS1 link to the ANS PoP in Greensboro, NC

Replaced by NC-REN
  - expands to North Carolina Research and Education Network
  - DNS name is changing from concert.net to ncren.net

Service changes:
  - dropping commercial services
  - concentrating on R&E
  - focus on user help

Main reason for name change:
  - British Telecom and MCI wanted the CONCERT name; MCNC had never
     registered CONCERT.

In return MCNC management wanted:
  - NC service community more prominent
  - alignment with NREN
  - emphasis on R&E

Press release 15 April
  Conversion to ncren.net in progress
  - Domain registered February 1994
  - Local changes simple but time-consuming
  - Remote changes hard and time consuming
  - Targeting 1 October completion; fairly sure of conversion by 31 October
  - Decommission CONCERT by 1 January 1995

Existing service problems:
  - Help desk overloaded from dialup UNIX shell accounts
  - Commercial providers springing up everywhere
  - The Umstead Act (a NC state law) says state funds cannot subsidize
     competition with commercial services.
  - CONCERT had sufficient non-governmental funding to cover commercial
     services, but accounting practices could not prove the separation, so
     they decided to just stop.

Service changes
  - Turned over dialup UNIX shell connectivity to Interpath March 1994
  - Planning to stop providing commercial IP and UUCP services by October
     1994
  - Planning to stop providing commercial direct services by 1 January 1995
  - Will continue direct connects, IP, UUCP for government, research and
     education customers.

Plans:
  - Pursuing new R&E customers:
      Remaining private colleges
      Community colleges
      K-12 schools
      State and local government
      Libraries
  - Providing security services:
      firewalls, Kerberos, PEM, secure DNS, secure routing.
  - Expanding information services:
      m-bone, NC state government documents, WWW services, and consultation
        -- to provide more access
  - Internet connection will be upgraded to 45Mbps in October 1994
  - Work on a NC Information Highway (NCIH)

In response to a question about NC microwave trunking, he noted that the
Research Triangle Park area is at 45Mbps and remote areas are at 25Mbps.

In passing he noted that ATM interaction with the research community is an
interesting opportunity, indicating that Southern Bell, GTE, and Carolina
Telephone are working on ATM infrastructure.

In response to a question about the number of sites changing to NC-REN he
stated there were about 20 R&E direct connections which would move, and that
the narrowed focus of the NC-REN would not change the cash flow model
significantly.


================
"Transition from NSFnet Backbone to the NAPland":
Sue Hares

Available via WWW at URL: http://rrdb.merit.edu

If mid-level networks want to send Sue information concerning any aspect
of their transition plans, please do.  Also indicate what can be published
(this second permission is the hard part) -- Sue will respect
confidentiality requirements.  They desperately need information about local
and regional plans so they can manage the transition for NSF.

NOTE: The following is incomplete because Sue went through it very quickly.
However, as a teaser if nothing else, some of the information on the slides
available at the above URL is included below, as well as most of the
significant discussion....

NAP online Dates:
  Sprint NAP  11 August
  PacBell     mid-September
  Ameritech   26 September

Currently scheduled NSFnet service turn-down.  Note this does not say
anything about tangible infrastructure changes, only NSFnet service plans.
That is, NSF says they intend to stop paying for the forwarding of traffic
via the indicated ENSSs, no more, no less:

Category 1 CoREN (numbers are ENSSs): (first round)
  BARRnet     128
  SURAnet     138 136
  SESQUInet   139
  MIDnet      143
  CICnet      130 129 131
  NYSERnet    133
  NEARnet     134
  NWnet       143

In conversation it was reported that PREPnet is not to use the PSC
connection for access after 1 October.

The real message is that these and the following numbers are "official
notification" for management planning.  It was recommended to "flick the
lights" before actual turn-off -- i.e. install the replacement connectivity
and turn off the NSFnet connection to see what breaks.
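
As a concrete illustration of the "flick the lights" idea, one could
snapshot reachability to a handful of important destinations before the
cutover and diff the results afterward.  The sketch below (Python, chosen
here purely for illustration) assumes a hypothetical host list and a simple
ping check; it is not a procedure anyone specified at the meeting.

    # Rough sketch of a "flick the lights" check: record which destinations
    # answer before the old NSFnet connection is turned off, then re-test
    # afterward and report anything that broke.  The host list and the use
    # of ping are illustrative assumptions, not details from the meeting.
    import subprocess

    HOSTS = ["host-a.example.edu", "host-b.example.net"]   # hypothetical

    def reachable(host):
        """One ICMP echo with a 2-second timeout; True if it answered."""
        rc = subprocess.run(["ping", "-c", "1", "-W", "2", host],
                            stdout=subprocess.DEVNULL,
                            stderr=subprocess.DEVNULL).returncode
        return rc == 0

    before = {h: reachable(h) for h in HOSTS}
    input("Old connection turned off?  Press Enter to re-test...")
    after = {h: reachable(h) for h in HOSTS}

    for h in HOSTS:
        if before[h] and not after[h]:
            print("BROKE:", h, "reachable before cutover but not after")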

Again Sue pleaded for information as it becomes available and permission to
announce it as soon as possible.

Category 2 Regional ENSSs
  Argonne     130
  PREPnet     132
  CA*net      133 143 137
  ALTERnet    134 136
  PSI         136 133 (133 will not be configured here after "the
                       NYSERnet/PSI split is fully consummated.")
  JvNCnet     137
  THEnet      139

Category 3 Regional ENSSs
  MICHnet     131


NOTE: More complete information concerning the above is available online.

Sue reiterated that the "decommissionings" simply end an organization's
status as a recipient of NSFnet services.  It would be a good idea for each
affected organization to talk to any or all service providers between the
organization and the NSFnet for details about other aspects of the
connection.

As for the differences between the categories: category 1 is primarily
CoREN, category 2 is the other regionals, and category 3 includes
supercomputer sites and less firmly planned sites.

More information welcomed:
  Anyone got a contract from NSF?
  Anyone want to tell Sue their NSP?
  Got some private announcements, need more.

Sue wants information to forward to NSF even if it is not public.  She will
respect privacy, but it is important to inform NSF even if the information
is caveated by "may change because...".

When asked about the time-lines for the various categories, it was stated
that NSF wants to have the category 1 sites switched off the NSFnet by 31
October.  Beyond that, it is currently phrased as a best effort task.

There was some discussion about CoREN test and transition plans.  Note that
load and trans-NAP plans are still being worked out.  There appears to be
significant concern about not taking any backwards steps.

One person proposed working out bilateral testing agreements.  This provoked
discussion of a tool called offnet* (and some nice tools Hans-Werner Braun
has written).  Some or all of these tools will be made available by Merit;
however, it was stressed that use [of these tools] by the regionals is
intended to instrument local sites, and Merit cannot allow additional
connections to NSFnet backbone monitoring points.


* [OFFNET was/is a program which tracked/tracks the nets which are
configured but not heard.  This is used by Enke Chen ([email protected]) in
generating reports about the number of configured vs. heard nets (difference
= "silent nets").  There is a constantly-increasing number of nets which
have been configured but are not actually announced to the NSFNET. -SJR

Anyone wanting the OFFNET code should contact Susan Hares <[email protected]>
and cc: [email protected] - skh]
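
For the curious, the "silent nets" computation described above amounts to a
set difference between configured and heard prefixes.  A minimal sketch in
Python follows; the file names and one-prefix-per-line format are invented
for illustration and are not OFFNET's actual data formats.

    # Minimal sketch of a "silent nets" report: nets configured in the
    # policy database but never heard in routing announcements.  File
    # names and formats are illustrative assumptions, not OFFNET's inputs.

    def load_nets(path):
        """Read one network prefix per line, skipping blanks/comments."""
        with open(path) as f:
            return {line.strip() for line in f
                    if line.strip() and not line.startswith("#")}

    configured = load_nets("configured_nets.txt")  # from policy database
    heard = load_nets("heard_nets.txt")            # from announcements

    silent = configured - heard
    print(len(configured), "configured,", len(heard), "heard,",
          len(silent), "silent nets")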

================
NSFnet statistics:
Guy Almes

Traffic is still doubling!  Traffic topped 70 Gigapackets per month in May
and June.

Guy noted that the December 94 chart will be interesting -- how to measure,
and what makes sense to measure, are new questions in a backboneless regime.
There will be a transition from traffic into one backbone to traffic into
multiple whatevers.  Should any resulting numbers be counted?  It was
observed that it would be hard to avoid double counting in such an
environment.

The general consensus was that there is a need to pick an appropriate set of
collection points:  e.g. transition from BARRnet to/from NSF to BARRnet
to/from CoREN provider.

One position contends that we really want customer-to-BARRnet data rather
than BARRnet-to-CoREN-provider data.  However, it was observed that this is
not tractable or trackable.

Other statistics show:
  952 Aggregates currently configured in AS690
  751 announced to AS690
  6081 class based addresses represented

There were two additional slides depicting: 1) IBGP stability: the solid
line is the percentage of IBGP sessions which have transitions during the
measurement intervals, and 2) external route stability: the solid line is
external peers.

Data collection is once again in place on backbone and has been operational
since 1 June.

In conversation, it was noted that the Route Servers will be gathering
statistics from the NAPs.  The Route Servers will be gated engines and will
be located at the NAPs.


UPDATES:
ANS router software activity
  Software enhancements:
      RS960 buffering and queueing microcode updated
       - increased the number of buffers, and went from max-MTU-sized
          buffers to 2+kB chainable buffers (a max-size FDDI frame will fit
          in two buffers with room to spare)
       - dynamic buffer allocation within the card
       -- the two together really improve dynamic burst performance
      Design for improved end-to-end performance
       - Based on Van Jacobson and Floyd random early drop work.
       - End-to-end performance is limited by bandwidth delay product
       - current protocols deal gracefully with a single packet drop, but
          multiple dropped packets push the algorithm into slow start.  With
          "current" Van Jacobson code, even brief congestion in the path
          will cause things to back off, even under low-end loadings.

Work shows that Random Early Drop slows things just enough to avoid
congestion without putting particular flows into slow-start.
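
As a sketch of the idea (not the ANS implementation -- the thresholds and
drop-probability slope below are invented for illustration), random early
drop discards an occasional arriving packet with a probability that rises
as the average queue depth grows:

    import random

    # Sketch of the random early drop idea.  MIN_THRESH, MAX_THRESH,
    # MAX_DROP_P, and WEIGHT are illustrative values, not the parameters
    # used in the ANS router code.
    MIN_THRESH = 5     # avg queue depth below which nothing is dropped
    MAX_THRESH = 15    # avg queue depth at which everything is dropped
    MAX_DROP_P = 0.1   # drop probability as avg depth nears MAX_THRESH
    WEIGHT = 0.2       # weight for the exponential moving average

    avg_depth = 0.0

    def should_drop(queue_depth):
        """Drop an occasional packet early, with probability rising as
        the average queue grows, so individual flows back off gently
        instead of losing several packets at once and collapsing into
        slow start."""
        global avg_depth
        avg_depth = (1 - WEIGHT) * avg_depth + WEIGHT * queue_depth
        if avg_depth < MIN_THRESH:
            return False
        if avg_depth >= MAX_THRESH:
            return True
        p = MAX_DROP_P * (avg_depth - MIN_THRESH) / (MAX_THRESH - MIN_THRESH)
        return random.random() < p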

In passing, Guy noted that he figures the speed of light as roughly 125
mi/ms on general phone company stuff.
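
That rule of thumb, combined with the bandwidth-delay product mentioned
above, gives a quick way to size TCP windows.  A back-of-the-envelope
calculation follows; the 2,800-mile path length is an assumed
coast-to-coast figure, not a measured circuit distance.

    # Back-of-the-envelope bandwidth-delay product using the ~125 mi/ms
    # rule of thumb.  The 2,800-mile one-way path is an assumption, not
    # a measured circuit distance.
    PROP_MI_PER_SEC = 125e3   # 125 mi/ms
    PATH_MILES = 2800         # assumed coast-to-coast fiber path
    LINK_BPS = 45e6           # DS3

    rtt = 2 * PATH_MILES / PROP_MI_PER_SEC   # round-trip time in seconds
    bdp_bytes = LINK_BPS * rtt / 8           # bytes in flight to fill pipe

    print("RTT ~= %.0f ms" % (rtt * 1000))        # ~45 ms
    print("BDP ~= %.0f KB" % (bdp_bytes / 1024))  # ~250 KB window needed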

The conditions and results were summarized on two slides:

  + Single flow Van Jacobson random early drop:
      41Mbps at 384k MTU cross-country (PSC to SDSC?)
      This code (V4.20L++) is likely to be deployed in a month or so.

By way of comparison Maui Supercomputer center to SDSC was 31Mbps using an
earlier version of code with 35 buffers.  Windowed ping with the same code
did 41Mbps.

  + Four flow Van Jacobson random early drop:
      42Mbps at 96kB MTU.
      All the numbers are with full forwarding tables in the RS960s.

In other news...:
  + SLSP support for broadcast media completed
  + Eliminated fake AS requirement for multiply connected peers.
  + Implemented IBGP server.

Pennsauken (the Sprint NAP) is an FDDI in a box.

================
CA*net:
Eric Carroll

All but three backbone links are now at T1 and there are dual T1s to each US
interconnect.

Pulled in Canadian government networks.  Using Ciscos to build the network.

Still seeing 8-10x US costs for service.  CA*net will grow to DS3 when they
can get it and afford it(!).

Numbers on the map slide are percentage utilization.  Note that 12 routers
were installed between mid-March and the end of April and these are early
numbers.  Note that the British Columbia to NWnet T1 link went to saturation
in 5 hours.  This appears to be due to pent-up demand, not particular users
or programs.

The 7010 roll-out had a lot of support from Cisco.  Ran into some problems
with the queuing discipline on FT1 lines.

Still doing NNSTAT on an RT for now, but working with an RMON vendor to get
stuff for a new implementation.

When asked about using inverse multiplexors for increased bandwidth, Eric
indicated CA*net is currently just using Cisco's load sharing to the US;
however, inverse multiplexors would be considered when needed.

A question was raised about CA*net connectivity plans in light of the
impending NSF transition.  Currently international connectivity is just to
the US, specifically to the US R&E community.  There is some interest in and
discussion of other international connectivity, but cost and other factors
are an issue.

CA*net hopes to place NSF connectivity order by next week.

Biggest concern is the risk of becoming disconnected from what Eric termed
the R&E affinity group.

CA*net currently carries ~1000 registered and ~900 active networks.

CA*net is not AUP-free; instead it is based on a transitive AUP "consenting
adults" model.  If two Canadian regionals or providers agree to exchange a
particular kind of traffic, then CA*net has no problem.

CA*net just joined the CIX, which prompted a question as to whether ONet is
a CIX member.  In response Eric characterized CA*net as a cooperative
transit backbone for regional members.  Therefore CA*net joining the CIX is
somewhat meaningless in and of itself, and, by implication, is only
meaningful in the context of the regionals and providers interacting via
CA*net.

In response to another question, Eric indicated that CA*net is still seeing
growth.


================
MAE-East Evolution:
Andrew Partan

(MAE == Metropolitan Area Ethernet)

Andrew volunteered to conduct an impromptu discussion of MAE-East plans.

There is an effort underway to install a FDDI ring at the MFS Gallows Rd PoP
and connect that ring to MAE-East using a Cisco Catalyst box.

MAE-East folks are experimenting with GDC switches.

Is there a transition from MAE-East to the SWAB?:  Unknown.

(SWAB == SMDS Washington [DC] Area Backbone)

MFS DC NAP is proposing to implement using NetEdge equipment.

Any MAE-East plans to connect to MFS NAP?:  Unknown.

ALTERnet is currently using a Cisco Catalyst box and is happy.

Time-frame for implementing MAE-East FDDI?:  Not yet; still need management
approval.  Hope to have a start in the next several weeks.

Those interested in MAE-East goings-on and discussions with members should
join the mailing list MAE-East[-request]@uunet.uu.net

For what it may be worth, they "had to interrupt MAE-LINK for 5 seconds this
week to attach an MCI connection".

In summary (in response to a question): one would contract with MFS for
connectivity to MAE-East.  Then one would need to individually negotiate
pairwise arrangements with other providers with which there was an interest
in passing traffic.  As far as is known there are no settlements currently,
but no one could say for sure.

================
Random Bits:

SWAB (SMDS Washington Area Backbone): In response to a point of confusion,
it was stated that the SWAB bilateral agreement template is just a sample,
not a requirement.

CIX:  The CIX router is getting a T3 SMDS connection into the PacBell
fabric.  ALTERnet and PSI are doing so too.  CERFnet is already on.

Noted in passing: each SMDS access point can be used privately, to support
customers, to enhance the backbone, etc.  This could have serious
implications for other provider agreements.

CERFnet:  Pushpendra Mohta is reported to be happy, but the group understood
that most CERFnet CIRs are at 4Mbps over T3 entrance facilities.  PacBell
was reportedly running two switches with 200Mbps backplane capacity,
interconnected with a single T3.  They are planning to increase provisioning
-- they already have a lot of demand.

[Pushpendra adds: PacBell operates two switches in the Bay Area, one in San
Francisco and one in Santa Clara.  The former is practically full, and the
latter is brand new.  All new T3 orders will end up on the Santa Clara
switch.  It IS true that the backplane of the switch is only 200Mbps.

Because the Santa Clara switch is new, the switches are interconnected by
only one T3 link.  However, the switches are capable of more than one T3
link, and the product manager at PacBell (Dick Shimizu) has assured me that
enough demand would warrant a new T3 between the switches, etc.

Providers thinking of buying T3-level services should specify the Santa
Clara switch, although it should end up being used anyway.  I have alerted
the product manager and he will ensure that T3 circuits are on the SC
switch.

A new switch is being planned for early next year, although enough demand
will accelerate that deployment as well.
]

[Addendum from PSI:
In addition, PSI installed an SMDS switch in Santa Clara several weeks ago
which has a gigabit backplane.

So, if there is a problem with CIX SMDS throughput, there is a "net": using
PtP T3s and multiple SNIs on the CIX (PacBell) SMDS connection, remapped
into another (PSI's) switch.

Marty
]


