
NANOG 14 Agenda

Presentation File Key:
  • Windows Media video, requires Windows Media Player to view.
  • Real Video, requires Real Player to view.
  • PDF Document, requires Adobe Acrobat Reader to view/print.

Sunday, November 8, 1998
Time/Webcast | Room | Topic/Abstract | Presenter/Sponsor | Presentation Files
6:30pm - 7:15pm Free Software for Configuring Routers and Monitoring Routing
Speakers:
  • Mukesh Agrawal, IPMA/U-M.
  • Abha Ahuja, Merit Network.
  • Jimmy Wang, IPMA/U-M.
7:15pm - 8:45pm 

Tutorial: Good ISPs Have No Class: Addressing Nuances and Nuisances

This tutorial reviews some of the more subtle points of CIDR, aggregation, and renumbering. Included are tricks and techniques that the newer ISP might need, including pure address administration and procedures for submitting address space justifications. Hank Nussbacher's CIDR FAQ is a prerequisite for the session.
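As a small illustration of the aggregation the tutorial covers (not part of the tutorial materials), contiguous prefixes can be collapsed into a single classless announcement; Python's standard `ipaddress` module does this directly:

```python
import ipaddress

# Four contiguous /24s aggregate into a single /22 announcement,
# shrinking the number of routes a provider must advertise.
prefixes = [ipaddress.ip_network(p) for p in
            ("192.0.2.0/24", "192.0.3.0/24", "192.0.0.0/24", "192.0.1.0/24")]
aggregated = list(ipaddress.collapse_addresses(prefixes))
print(aggregated)  # [IPv4Network('192.0.0.0/22')]
```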

  • Howard Berkowitz, none.
Howard Berkowitz Presentation Slides (PDF)
8:45pm - 9:00pm 

Tutorial: Optimal External Route Selection: Tips and Techniques for ISPs.

Tips for ISPs on external route selection, including the BGP MED and LOCAL_PREF attributes; peering at multiple locations; backup transit; and how to mix transit, public, and private peering.
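As a rough sketch of the tiebreakers the tutorial discusses, a simplified best-path comparison can be written in a few lines of Python. The peer names and attribute values below are invented, and a real BGP decision process involves many more steps than these three:

```python
# Simplified BGP best-path selection: higher LOCAL_PREF wins first,
# then shorter AS path, then lower MED (real routers apply more steps).
routes = [
    {"peer": "transit-A",    "local_pref": 100, "as_path_len": 3, "med": 50},
    {"peer": "private-peer", "local_pref": 200, "as_path_len": 4, "med": 0},
    {"peer": "public-peer",  "local_pref": 200, "as_path_len": 4, "med": 10},
]
best = min(routes, key=lambda r: (-r["local_pref"], r["as_path_len"], r["med"]))
print(best["peer"])  # private-peer
```

Note that LOCAL_PREF dominates: the private peer wins despite a longer AS path than transit, which is exactly how an ISP expresses "prefer peering over transit."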

  • Avi Freedman, Net Access.
Avi Freedman Presentation (PPT)
Monday, November 9, 1998
Time/Webcast | Room | Topic/Abstract | Presenter/Sponsor | Presentation Files
9:00am - 9:15am Welcome, Introductions, Future Meetings
Speakers:
  • Craig Labovitz, Merit Network.
Labovitz Presentation (PPT)
9:15am - 10:45am Alternative Interconnects: Technology, Attachment Requirements, and Performance Measurements
Moderators:
  • Bill Manning, ISI.
  • Steve Feldman, none.
  • John Meylor, Cisco Systems.
  • Christian Nielsen, none.
  • Bill Norton, Equinix.
  • Jeremy Porter, none.
  • Dave Siegel, none.
  • David Thomas, none.
Video: Alternative Interconnects
Bill Norton Presentation (PPT)
John Meylor Presentation (PPT)
10:45am - 11:00am Break
11:00am - 11:30am 

Who You Gonna Call?

A review of methods for inter-provider communications and their effectiveness. Donelan reviews past problems and speculates about future trends and possible solutions. Are there methods to assure communications will work under unusual conditions? What role should ISPs play in critical infrastructure protection planning?

  • Sean Donelan, Data Research Associates.
Video: Who You Gonna Call?
11:30am - 12:00pm 

Interesting Peering Activities at the Exchange Points

During the summer of 1997, while working on Internet peering issues at iMCI, I had a chance to help track down a couple of unusual peering activities. These involved rewriting eBGP next hops to some NAP routers, passing third-party next hops, pointing default, and registering incorrect DNS names for NAP routers. Some of these activities were due to misconfiguration, such as running IGP protocols over NAP FDDIs or turning on native IP multicasting on the NAPs.

Case #1: Rewriting eBGP Next Hops

At this time iMCI had two routers at MAE-East, called cpe2 and cpe3. I was informed by the NOC that the cpe2 FDDI inbound was very congested. I turned on the Netflow feature on iMCI's routers and found that 15% of the incoming traffic from cpe2 was unaccounted for. This meant the traffic was coming from someone iMCI did not peer with at MAE-East. Further analysis showed that almost all the traffic was coming from a single subnet.

The BGP routing table showed that the subnet was 3 AS hops away from iMCI. iMCI had established private peering with ISP-1, ISP-1 peered with ISP-2, and ISP-2 peered with ISP-3, which owned the subnet. iMCI did not peer with any of these ISPs at the NAPs, so the only way ISP-3 could point next hop to iMCI was to rewrite the next hop by matching iMCI's AS number in its eBGP routes.

I placed an ACL packet filter on cpe2 to block traffic from ISP-3. Luckily, ISP-3 had only a single block of addresses, which made packet-level filtering possible. ISP-3 then changed its routing to ISP-2 and ISP-1. After I removed the packet filtering, however, ISP-3 pointed its traffic back to iMCI again.

I then decided to create something even more interesting: a filter that let ISP-3's ICMP, traceroute, and DNS packets through but blocked all other IP packets, just to add some complexity to their troubleshooting. The filter was in place for four days; apparently they could not pinpoint the problem, and they finally switched traffic back to normal.

Case #2: Passing Third-Party Next Hops

This case was not flagged by the Netflow analysis, because the reverse path lookup did not fail. iMCI peered with ISP-4 at MAE-East, but not with ISP-5. ISP-4 not only passed our next hop to ISP-5, but also passed ISP-5's next hop directly to us. In this way, we exchanged traffic with ISP-5 directly. We could, of course, manually overwrite all the next hops to ISP-4's address, but we still could not stop ISP-4 from passing our next hop to ISP-5. After exchanging some email with ISP-4, they agreed to fix this. Some might believe it is more efficient to pass traffic between third parties over multi-access media, but I believed that peering was not just an engineering issue; it was also a business issue.

Case #3: Pointing Default

Some routers at the exchange points simply pointed default to others. One strange example was a router at MAE-East that pointed default to UUnet; the router name had a reverse DNS lookup of xxx.internetmci.net. UUnet therefore asked me if this was one of our routers. I knew that we had registered mci.net and internetmci.com, but was not sure about internetmci.net. A couple of days later, the IXP router pointed default to us instead of to UUnet. Since they claimed to be MCI, we did an SNMP query to the router, and it returned ip.ipRouteTable.ipRouteEntry.ipRouteNexthop plus an IP address: our cpe3's FDDI address at MAE-East. I also obtained the AS number of the router and was able to find out who it belonged to. After several email exchanges, the owner changed their default route so it no longer pointed to us, but to someone else ;-) Somehow, they believed a router had to have a default route.

Other Activities

Other issues were debatable. I found some ISPs running IGPs over the FDDI at the NAPs. When I sent those ISPs e-mail, they told me this was due to a misconfiguration. One could argue this was a way to create redundancy at the NAPs, but an ISP's routers were usually at the same location, or in the same rack at the NAPs, so redundancy could be achieved with a private LAN instead of bothering other routers. One or two routers/workstations on MAE-East were using native IP multicast. I calculated that about 2 Mbps of this traffic was going over the NAP FDDI, but didn't know whether any router on the LAN was receiving it. I talked to the senders, who told me they planned to shut down multicast on their box.

I spent a lot of time dealing with route consistency over different peering points. On the Internet today, we use shortest-exit routing. If the routes we learn from our peers are not consistent across all the peering points, we might carry some traffic unnecessarily across our backbone. Sometimes we had to avoid certain peering points because of severe congestion; this was usually done through mutual agreement.

To deal with these unusual activities, we need to detect them, communicate with the right people, and, sometimes, find a way to stop them. To detect problems, I was able to use:

  • Netflow statistics for reverse route lookup.
  • Traceroute with the "-g" switch.
  • If LSR is disabled on the other end and you are trying to investigate how that router routes certain prefixes, temporarily create a static route pointing to the router in question and then trace to that address. If that router routes the trace packet back to you, you will see the trace going back and forth between your router and the other router.
  • MAC address accounting.

Methods you can use to stop unwanted traffic include:

  • Packet-level filtering, if the router CPU can handle it.
  • MAC address filtering/rate-limiting, sometimes combined with WRED.
  • Filtering routes out of the network if necessary.

Preventive practices include:

  • NAP GIGAswitch L2 filtering.
  • Use next-hop-self and always overwrite the next hop to your peer's address.
  • If you don't have a customer on the NAP routers, remove the non-customer routes from them, ensuring that peers' traffic can only go to your customer destinations.
  • Use the loopback address for iBGP peering, so you don't have to carry NAP LAN address blocks over the network.
  • Use ATM PVCs.
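The accounting idea behind Case #1, matching incoming traffic against the set of prefixes you expect from your peers and flagging the remainder, can be sketched in Python. The flow records and prefix list below are hypothetical stand-ins for Netflow export data:

```python
import ipaddress

# Hypothetical peer prefixes and flow records; real detection used
# Cisco Netflow export, but the accounting logic is the same idea.
peer_prefixes = [ipaddress.ip_network("198.51.100.0/24")]
flows = [
    {"src": "198.51.100.7", "bytes": 12000},  # matches a known peer
    {"src": "203.0.113.9", "bytes": 48000},   # matches no peer: suspect
]

def unaccounted(flows, peer_prefixes):
    """Return flows whose source matches no known peer prefix."""
    bad = []
    for f in flows:
        src = ipaddress.ip_address(f["src"])
        if not any(src in net for net in peer_prefixes):
            bad.append(f)
    return bad

for f in unaccounted(flows, peer_prefixes):
    print(f["src"], f["bytes"])  # 203.0.113.9 48000
```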

  • Naiming Shen, Cisco Systems.
Shen Presentation (PPT)
12:00pm - 1:30pm Lunch
1:30pm - 2:15pm Canada's National Optical Internet
Speakers:
  • Bill St. Arnaud, CANARIE.
Video: Canada's National Optical Internet
2:15pm - 3:45pm 

Technology for Backbone Web Caching

Includes:
  • Domestic Users of Caching
  • Enabling Technologies

  • Peter Danzig, Network Appliance.
  • James Aviani, Cisco Systems.
  • Bill Maggs, MCI.
  • Shirish Sathaye, Alteon.
  • Ed Kern, DIGEX.
Bill Maggs Presentation (PPT)
Video: Caching Part 1
Video: Caching Part 2
Peter Danzig Presentation (PPT)
Shirish Sathaye Presentation (PPT)
3:45pm - 4:00pm Break
4:00pm - 5:25pm Multicast/MBGP: Real-Life Experiences in the Field
Moderators:
  • Dave Meyer, Cisco Systems.
  • Danielle Deibler, DIGEX.
  • Mujahid Khan, Sprint.
  • Henry Kilmer, DIGEX.
  • Dorian Kim, Verio.
  • Jian Li, Qwest.
  • Doug Pasko, Cable & Wireless.
  • Steve Rubin, AboveNet.
  • Amir Tabdili, Sprint.
Video: Multicast/MBGP Part 1
Video: Multicast/MBGP Part 2
5:25pm - 5:30pm 

Planning for Multicast Address Allocation

A Plea for Input from the IETF Multicast-Address Allocation (MALLOC) Working Group

  • Dave Thaler, Microsoft.
5:30pm - 7:30pm Beer 'n Gear
  • Sponsors: Juniper Networks; Digital Broadcast Network; CacheFlow; Aptis Communications.
7:30pm - 9:00pm

Data Center Needs, Problems, and Technology

Scribe Notes

The group (about 120 people) met from 7:30 to 9:00pm Monday evening to discuss the issues folks have run into constructing and operating Internet Data Centers. About a dozen attendees in addition to the panel had constructed or were installing Internet Data Centers and shared their experiences in the finest tradition of a BOF; many others contributed observations and suggestions based on their experiences with Internet facilities. I've tried to capture the nature of the discussion from these notes. After we ran out of time, many folks adjourned to a room upstairs and continued expanding the notes in smaller focus groups. I wasn't involved in all of these discussions, but I've tried to capture their additions as well.

I've included two appendices: Michael P. Lucking contributed a worksheet for calculating the BTU cooling units required for data centers, and I added the panel's conference call notes highlighting discussion points that we may not have covered during the BOF.

Format of this document

We came up with a list of Internet Data Center issues and made recommendations based on the experiences of the group. The intent was to share information about critical infrastructure support, not necessarily to cover all aspects of data center construction. The hope is that this document will help folks know what to look for in stable infrastructure in which to put our Internet equipment.

Power

  • The big issue here is grounding, for which there are military specs (source?).
  • Internet Data Centers are moving to requiring both AC and DC. There was some debate over the relative benefits of AC (it's all most ISPs know) and DC (cleaner, more consistent).
  • Consider the security of the grounding system itself: one can tap into the data system through the grounding system.
  • Resources: the IEEE Emerald Book, which describes data and electrical grounding; the Green Book, which covers commercial grounding; and NEBS standards for office space delivery.
  • There is a multi-point vs. single-point grounding tradeoff. The recommendation is to use multi-drop, for simplicity-of-engineering reasons.
  • Keep a telephone, an electrical outlet, a flashlight, a telescoping dental mirror, and headtop mining lights in the power room. Make sure you have extra brass (melting) fuses in stock (100-Amp, 300-Amp, and 600-Amp bars).
  • Have an EPO panic button, with a cover.
  • 2n/n+1 redundancy is required for all critical facilities.
  • Consider stronger-than-expected power for facilities staff workstations.
  • Emergency power off is part of the electrical code. The code differs between a facility called a "Computer Room" and one with another name; the suggestion was that you may be better off using another name, because the codes for a computer room may require non-optimal implementations for Internet facilities.
  • Power conditioning is required. The noise and frequency of serious spikes vary and require you to condition the lines.
  • Checking on power shouldn't require power.
  • Backup UPS/generator power for the facility dictates how long humans can inhabit the facility, and therefore determines the life of the facility. Security systems need to be on the same protected power system as the rest of the facility.
  • Redundant power through multiple grids is non-trivial but highly desirable. Experience has shown that Internet facilities typically have A/B busses for power and 30 Amps delivered to each rack.
  • Some recommended that data and power lines be on different planes. Overhead power and data runs need to be well supported (a weight concern) and, in seismic areas, well braced.
  • Fiber-impregnated batteries were recommended, to avoid acid on the floor.
  • Most importantly, all Internet facilities must have generators to survive long-term power outages (especially as the power industry goes through competitive deregulation…). Diesel fuel may raise EPA issues in some areas. Diesel fuel contamination is an issue; one solution is multiple fuel tanks. Contracts for trucking in additional fuel are suggested; however, during this type of crisis, other facilities (emergency support, police, hospitals, etc.) may get priority service, and indeed your supplier may not have the robustness required. Fire drills here may help. Dual contracts for fuel delivery were also suggested.

HVAC

  • The cooling needs of Internet Data Centers are unique. The amount of heat generated by some of this equipment is extreme, and highly variable in our industry.
  • Chilled-water cooling was recommended for equipment, along with heating/cooling accommodations for emergency staff.
  • Rule of thumb: kW/20 = t, where t is the number of tons of cooling required.
  • Tradeoffs include raised floors versus relay racks.
  • Redundancy for HVAC systems is important.
  • Load-shedding protocol: when do you turn off monitors in the data center?
  • Monitor the HVAC system.
  • Humidity: over-cooling causes condensation on equipment; too dry leads to excessive static.
  • Watch for hot spots and cold spots. One exchange is frigid under the vent and still hot elsewhere.
  • Use clean/soft/distilled water.
  • The failure mode for HVAC should be open/full-blast AC.
  • Use separate venting for the Internet Data Center than for the broader building.
  • All AC should be on the same power system as the rest of the data center.
  • Water pipes or drain pipes above the facility raise concerns.
  • Upper-floor drains clogging are a concern as well; one solution was rubberized floors above the data center.
  • One concern was volatile organic compounds (radon, etc.) in the facility. An approach is to specify X air changes per hour; 85% filtration of outside air was recommended.
  • HVAC problem resolution then needs to be pulled into the NOC in some way…
  • Standing water is bad: condensation trays and sewer-backup stories.
  • Timely A/C maintenance is required.

Fire Suppression

  • Pre-action sprinklers require air in the pipe prior to water.
  • Two-stage systems are recommended, triggering only when sensors in two zones go off.
  • Trigger in different zones as needed.
  • A standby switch to abort firing of the system is highly desirable.
  • In some areas the trigger needs to also cut power...
  • Fire drills are required prior to opening an Internet Data Center. This may be expensive; in some areas, two firings are required, separated by a 20-minute ventilation time.
  • FM-200 systems are common as a replacement for Halon, water, and chemical systems.
  • Heat sensors and smoke sensors are required.
  • Disaster recovery issues.
  • Drains.
  • Drills and training of staff.
  • Talk with the local fire company.
  • Make sure a fire alarm causes the security system to fail full-open for the fire department.
  • Need a manual forced dump (this was added upstairs, and I don't know what is meant here).
  • Need many large extinguishers.

Physical Security

  • Walls should stop at the concrete, and there should be no raised ceilings.
  • Cellular phone for the alarm and monitoring system.
  • Battery backups for the entry access system: how long should the facility stay up after the loss of power? More than 6 hours is required.
  • Fireproof, high-stress, shatterproof glass.
  • A pizza PO to prevent engineer riots; coffee, sushi, and espresso in emergency facilities.
  • The fire alarm defeats the security system, for safety reasons.
  • Keyed (personal) biometric access control and off-site logging.
  • Air-lock, one-at-a-time access; no piggyback entry.
  • Motion detectors.

Cable Management

  • Tie wraps (not too tight) or velcro screwed down.
  • Fiber: square open-face conduit.
  • Horizontal fiber-management units.
  • In-rack cable tie-down panels.
  • Bundles and patch panels (e.g. 50-pair or 100-pair copper; 24- or 48-strand fiber, single-mode and multimode).

Data Plant

  • T1 cabling.
  • Cat 5 cabling.
  • T3 distribution: no crimp connectors!
  • Hierarchical: T1/T3/Ether/Fiber.
  • Patch panels.
  • Star or modified-star topologies.
  • DACS - CNR.
  • T1 in-line testing.
  • DSX panels.
  • Optical splitter/monitor patch feeds.

Fire Drills

These are important; some facilities test weekly, alarms going off, etc.

General Issue

Across all topics there is a Y2K issue.

Layer 1 Recommendations

  • Use of 506 category.
  • We discussed the importance of fiber entrance diversity, that is, the ability for multiple carriers to enter the facility through different paths. Existing vaults, tunnels, shopping malls, and wireless were mechanisms to accomplish this.
  • We also pointed to the importance of locating data centers along telco fiber meet paths, which would make multiple-carrier ingress into the facility easier. The issue then becomes: "How do we find out where the carriers lay their fiber, so we can pick a good location for the Internet Data Center?"
Suggestions included looking at building permits (which are public records), asking install crews on the street doing an install, and simply asking the telco (which may require an NDA, a previous relationship, and perhaps dollar volume).

Voice Communication

We didn't get to discuss this.

Flooring & Ceiling

We didn't get to discuss this.

Where to Find Information on Data Centers

  • Discussions with CLECs and vendors.
  • Requirements lists for heating, venting, and air conditioning, and the responses from construction bids.
  • Banks, the military, and financial institutions already have fairly robust generic data center specs (where?).
  • Telcos use Bellcore docs describing wiring standards (how to get these?).

Data Center Mailing List: send a message to [email protected] with "subscribe datacenter" in the body.

Quote of the Day: "Build your data center next to someone who needs it more than you do."

Appendix A - Calculating BTU Cooling Units

This worksheet was contributed by "Michael P. Lucking" <[email protected]>. All temperatures are in degrees F.

1) Windows exposed to the sun. Use only one exposure: select the one that gives the largest result. If no venetian blind or shading device is available, multiply by 1.4.
  1.1) South ____ sq. ft x (Max outside temp - 30) = _____ BTU/HR
  1.2) E/W/SE ____ sq. ft x (Max outside temp - 3) = _____ BTU/HR
  1.3) NW ____ sq. ft x (Max outside temp - 23) = _____ BTU/HR
  1.4) NE ____ sq. ft x (Max outside temp - 25) = _____ BTU/HR
  1.5) N ____ sq. ft x (Max outside temp - 85) = _____ BTU/HR
  Answer for #1: MAX(1.1 - 1.5)

2) All windows not included in item 1 (interior windows, etc.)
  ____ sq. ft x (Max exposure temp - 69) = _____ BTU/HR

3) Walls exposed to sun (use only the wall with the exposure used in item 1)
  3.1) Light construction ____ lin. ft x (Max outside temp - 25) = _____ BTU/HR
  3.2) Heavy construction ____ lin. ft x (Max outside temp - 55) = _____ BTU/HR
  Heavy is defined as 12" masonry or insulation.

4) Shade walls not included in item 3
  ____ lin. ft x (Max outside temp - 55) = _____ BTU/HR

5) Partitions (interior wall adjacent to an unconditioned space)
  ____ lin. ft x (Max temp - 50) = _____ BTU/HR

6) Ceiling or roof
  6.1) Ceiling with unconditioned occupied space above
    ____ lin. ft x (Max temp - 90) = _____ BTU/HR
  6.2) Ceiling with attic space above
    6.2.1) No insulation ____ sq. ft x (Max temp - 83) = _____ BTU/HR
    6.2.2) 2" or more ____ sq. ft x (Max temp - 90) = _____ BTU/HR
  6.3) Flat roof with no ceiling below
    6.3.1) No insulation ____ sq. ft x (Max temp - 85) = _____ BTU/HR
    6.3.2) 2" or more ____ sq. ft x (Max temp - 90) = _____ BTU/HR

7) Floor (over unconditioned space or vented crawl space; ignore any heat gain from a floor directly on the ground or over an unheated basement)
  ____ sq. ft x (Max temp - 90) = _____ BTU/HR

8) People (includes an allowance for ventilation)
  ____ x 750 = _____ BTU/HR

9) Lights (if total wattage is known, use 9.1; else 9.2)
  9.1) ____ watts x 4.25
  9.2) ____ (sq. ft of floor space x 3) watts x 4.25

10) Computer load (some computers/routers actually supply BTU/HR ratings; otherwise you will have to calculate it)
  10.1) Total BTU/HR for all machines
  10.2) Total max wattage x 3.4 = _____ BTU/HR

Sum all the BTU/HR figures; this is your total load factor.
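The final steps of the worksheet (people, lights, and equipment load, then the sum) can be sketched in Python. The multipliers come straight from items 8-10 above; the input values below are made-up examples, not measurements of any real facility:

```python
# Worksheet items 8-10: 750 BTU/hr per person, watts x 4.25 for
# lighting, and total max wattage x 3.4 for computer/router load.
def btu_load(people, light_watts, equipment_watts):
    return people * 750 + light_watts * 4.25 + equipment_watts * 3.4

# Example: 4 staff, 2 kW of lighting, 30 kW of equipment.
total = btu_load(people=4, light_watts=2000, equipment_watts=30000)
print(total)  # 113500.0 BTU/hr
```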
Appendix B - NANOG 14 Data Center BOF Pre-Meeting Conference Call Discussion Points

Data Center Needs, Problems and Technologies
NANOG 14 BOF, 7:30-9:00pm

On the conference call:
  • Bill Norton <[email protected]>
  • Sean Donelan <[email protected]>
  • Jay Adelson <[email protected]>
  • Juston Newton <[email protected]>

Abstract: Internet facilities need to grow more robust to meet the needs of today's networking environment. Outages due to air conditioning problems, accidental circuit pulls, and shortages of space and bandwidth at collocation and exchange facilities all lead to reliability issues that now affect millions of users worldwide. This BOF is intended to highlight concerns and technology problems that affect the robustness of Internet Data Center facilities. We hope that specific recommendations can be made on areas for improvement that the community can adopt when constructing or improving infrastructure facilities.

We discussed a few approaches for this BOF and agreed that it should be informal, with perhaps a welcome and brief introduction of the panel by Bill, followed by a discussion of Internet facilities issues.

Discussion Points

  • Confidentiality: because so many are hiding mistakes, as an industry we are failing to learn effectively from others' mistakes. This BOF is a sharing forum.
  • Hollywood vs. reality: reliable production data centers often appear much duller than folks may expect, focusing on function over form.
  • The biggest problem facing Internet facilities is managing growth.
  • Cable management: keeping track of the number of cables, what goes where, the decision to leave a cable or pull it, and how many cables to pre-install. Which exchange point do you think can best track down a wire failure? Sophisticated software for doing this exists.
  • There are no Internet standards for facilities. Bellcore classes exist for telcos, and BICSI (Building Industry Consulting Service International) defines standards such as the number of data ports for an office building (TIA-568).
  • Data industry Not-Invented-Here syndrome: every Internet engineer has a religious view about the "right" way for a closet to look. We need something like a common-practices resource for hub/cable layout.
  • Phone companies have people dedicated to nothing but cable plant installs. Don't let folks do their own wiring.
  • Power in data centers: planning is difficult, including planning for power maintenance. Example: a single panel serving both the power feed and the UPS feed cannot be serviced without losing both.
  • Some resources on power may be available to our community: NEBS standards on power; military standards on power; hospital standards on power.
  • Service requirements for "critical services": recover in < 6 seconds. For "essential services": recover in < 10 minutes. For "routine services": recover in < 2 hours.
  • Design of data centers: who is qualified? Any Internet engineer? What expertise is required to build a commercial-grade Internet facility? There is no "Certified Data Center Consultant" and no good standards or common-practices documents.
  • Jay brought up his current experience of hiring a construction company with staff from large data center crews (IBM, etc.). Their expertise includes 100%-availability service provisioning: hospitals, police stations, financial markets, and the military.
  • Testing Internet facilities and fire drills.
  • Power factor discussion.
  • The Internet world is rapidly deploying and adding UPSes as an afterthought, treating the UPS as power conditioning, where electrical engineers would focus on building in power redundancy from the ground up.
  • Jay can talk about trade-offs in building facilities: building on a budget, expansion criteria.
  • A broader issue: how do you build quality Internet facilities into existing office space? Can it be done?

Comments and additions welcome...

Bill

  • Sean Donelan, Data Research Associates.
  • Bill Norton, Equinix.
9:00pm - 10:30pm Caching in Today's Marketplace: Vendor Approaches and Solutions
Speakers:
  • Duane Wessels, University of California, San Diego.
Tuesday, November 10, 1998
Time/Webcast | Room | Topic/Abstract | Presenter/Sponsor | Presentation Files
9:00am - 10:30am Updates from the Internet2 GigaPoPs
Speakers:
  • Stan Barber, Texas GigaPoP.
  • Ron Hutchins, Southern CrossRoads, Atlanta GigaPoP.
  • Mark Johnson, North Carolina Networking Initiative.
  • Dave Meyer, Oregon GigaPoP.
Mark Johnson Presentation (PPT)
Video: Updates from the Internet2 GigaPoPs
10:30am - 10:45am Break
10:45am - 11:00am ARIN Update
Speakers:
  • Kim Hubbard, ARIN.
ARIN Update (PPT)
11:00am - 12:00pm InterNIC Update
Speakers:
  • Mark Kosters, InterNIC.
Video: InterNIC Update Part 1
Video: InterNIC Update Part 2
12:00pm - 1:30pm Lunch
1:30pm - 2:00pm

CAR Talk: Configuration Considerations for Cisco's Committed Access Rate

Some months ago, @Home began evaluating CAR for some of the functionality we required. We evaluated solely CAR's rate-limiting capabilities and the extent to which rate limiting impacts the network. In the process we discovered how CAR interacts with TCP, as well as the optimum configuration of burst parameters.
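CAR's rate limiting is built on a token bucket: packets conform while tokens remain, and the burst size bounds how much back-to-back traffic passes before the exceed action (here, drop) kicks in. This is a minimal sketch with illustrative rate and burst numbers, not @Home's configuration:

```python
# Minimal token-bucket rate limiter of the kind CAR implements.
class TokenBucket:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8       # token refill, in bytes per second
        self.burst = burst_bytes       # bucket depth bounds the burst
        self.tokens = burst_bytes      # start with a full bucket

    def advance(self, seconds):
        """Refill tokens for elapsed time, capped at the burst depth."""
        self.tokens = min(self.burst, self.tokens + self.rate * seconds)

    def conform(self, packet_bytes):
        """True = transmit (conform action); False = drop (exceed action)."""
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False

tb = TokenBucket(rate_bps=8_000_000, burst_bytes=15_000)
# 20 back-to-back 1500-byte packets with no time for refill:
sent = sum(tb.conform(1500) for _ in range(20))
print(sent)  # 10: only the burst's worth conform until tokens refill
```

The burst parameter matters for exactly the TCP interaction the talk describes: a bucket sized too small drops whole TCP windows at once and collapses throughput well below the configured rate.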

  • Cathy Wittbrodt, @Home.
CAR Talk (PPT)
Video: CAR Talk: Configuration Considerations for Cisco's Committed Access Rate
2:00pm - 4:00pm Router Update: Fast Packet Forwarding and Packet Treatment for Differentiated Services
Moderators:
  • Curtis Villamizar, ANS.
  • Paul Ferguson, Cisco Systems.
  • John Stewart, Juniper.
  • Jeff Wabik, NetStar/Ascend.
  • Hank Zannini, Avici.
  • Steve Willis, Argon.
4:00pm - 4:15pm Closing Remarks
Speakers:
  • Craig Labovitz, Merit Network.

