Extending the Internet Exchange
to the Metropolitan Area

Keith Mitchell
keith@linx.org
Executive Chairman
London Internet Exchange

ISPcon, London
23rd February 1999
   Mostly a Case Study

• Background
• IXP Architectures & Technology
• LINX Growth Issues
• New LINX Switches
• LINX Second Site
What is the LINX?
• UK National IXP
• Not-for-profit co-operative of ISPs
• Main aim is to keep UK domestic
  Internet traffic in the UK
• Increasingly keeping EU traffic in
  the EU
• Largest IXP in Europe
           LINX Status
• Established Oct 94 by
  5 member ISPs
• Now has 63 members
• 7 FTE dedicated staff
• Sub-contracts co-location to 2 neutral
  sites in London Docklands:
  • Telehouse
  • TeleCity
• Traffic doubling every ~4 months!
  (see the sketch below)
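A rough sketch in Python of what that doubling rate implies; the ~4-month period is from this slide, and the 100x figure anticipates the capacity headroom quoted near the end of the talk:

```python
import math

# Traffic doubling every ~4 months (from the slide).
DOUBLING_PERIOD_MONTHS = 4

# 12/4 = 3 doublings per year -> 8x annual growth.
annual_factor = 2 ** (12 / DOUBLING_PERIOD_MONTHS)
print(f"annual growth factor: {annual_factor:.0f}x")

# At this rate, 100x headroom lasts 4 * log2(100) ~= 27 months.
months_to_100x = DOUBLING_PERIOD_MONTHS * math.log2(100)
print(f"time to consume 100x capacity: {months_to_100x:.1f} months")
```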
                 LINX Members
AT&T                        Frontier Technology   Oleane
ANS UK                      GlobalCenter          Onyx
Atlas                       GlobalOne             Planet Online
BT Internet Services        Graphnet              PSI UK
Cable & Wireless            GTS (Sovam)           RedNet
TeleWest (Cable Internet)   GX Networks (Xara)    QUZA
Carrier1                    HighwayOne            Technocom
Cerbernet                   IBM Global Network    Tele Danmark
Claranet                    ICL (ECRC)            Teleglobe
COLT                        INSnet                Telia
Compuserve                  IPf                   U-Net Internet
Demon Internet Services     Ireland Online        UUNET UK
Deutsche Telekom            mediaWays             UKERNA (JANET)
DIALnet                     Mistral               VASnet
Direct Connection           Nacamar               VBCnet
Easynet                     NETCOM                WireHub!
Esat Net                    NetKonect             Wisper Bandwidth
EuroNet                     Nildram               XTML
Exodus                      NTL Internet          Zoo Corporation
Freedom 2 Surf
LINX Members by Country

[Pie chart: UK 33, COM/US 14, DE 5, IE 3,
 plus 1 each from SE, CA, FR, RU, DK, EU/CH]
  Exchange Point History
• Initially established in 1992 by:
  • MFS, Washington DC - “MAE-East”
  • Commercial Internet Exchange,
    Silicon Valley - “CIX-West”
• Amsterdam, Stockholm, others
  soon afterwards
• Now at least one in every
  European, G8, OECD, etc. country
        IXP Architectures
• Initially:
   • 10baseT router to switch
   • FDDI between switches
   • commonly DEC Gigaswitches
• More recently:
   • 100baseT between routers and
     switches
   • Cisco Catalyst 5000 popular
       IXP Technologies

•   10Mbps Ethernet
•   100Mbps Ethernet
•   FDDI
•   ATM
•   Gigabit Ethernet
IXP Technologies - Ethernet
• 10baseT is only really an option
  for small members with 1 or 2 E1
  circuits and no servers at the IXP
  site (see the sketch below)
• All speeds of Ethernet will be
  present in ISP backbones for
  server connections for some time
  to come
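To make the first point concrete, here is a minimal sketch comparing a member's access capacity in E1 circuits (2.048 Mbps each) against a 10 Mbps exchange port; beyond a couple of E1s the headroom disappears:

```python
# How much headroom does a 10baseT exchange port leave a member
# whose backbone access is N x E1? (E1 = 2.048 Mbps)
E1_MBPS = 2.048
PORT_MBPS = 10  # 10baseT

for n in (1, 2, 4):
    access = n * E1_MBPS
    print(f"{n} x E1 = {access:6.3f} Mbps -> "
          f"port headroom {PORT_MBPS / access:.1f}x")
```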
IXP Technologies - 100baseT
• Cheap
• Proven
• Supports full duplex
• Meets most non-US ISP switch
     port bandwidth requirements
• Range limitations can be
  overcome using 100baseFL
IXP Technologies - FDDI
•   Proven
•   Bigger 4k MTU (see the sketch
    below)
•   Dual-attached stations are
    more resilient
•   Longer maximum distance
•   Full duplex via proprietary
    extensions only
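A small sketch of why the larger MTU matters: bigger frames mean less header overhead and far fewer packets per second for routers to forward. The 4352-byte figure is the usual IP MTU over FDDI, and a 40-byte IP+TCP header is assumed:

```python
# Payload efficiency and packet rate at Ethernet vs FDDI MTU,
# assuming 40 bytes of IPv4 + TCP headers per packet.
HEADERS = 40

for name, mtu in (("Ethernet", 1500), ("FDDI", 4352)):
    efficiency = (mtu - HEADERS) / mtu
    pps_100mbps = 100e6 / (mtu * 8)  # packets/s to fill 100 Mbps
    print(f"{name:8}: {efficiency:.1%} payload efficiency, "
          f"{pps_100mbps:5.0f} pkt/s at 100 Mbps")
```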
IXP Technologies - ATM
• Only used at US federally-
  sponsored NAPs, PARIX
  • Ameritech, PacBell, Sprint, Worldcom; FT
• Initially serious deployment
  problems
  • “packet-shredding” led to poor
    bandwidth efficiency (see the
    sketch below)
• Now >1Gbps traffic at NAPs
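The “packet-shredding” issue is the ATM cell tax: AAL5 appends an 8-byte trailer, pads each packet to a whole number of 48-byte cell payloads, and every 53-byte cell spends 5 bytes on its header. A sketch of the resulting efficiency for common packet sizes:

```python
import math

CELL_SIZE, CELL_PAYLOAD, AAL5_TRAILER = 53, 48, 8

def atm_efficiency(packet_bytes: int) -> float:
    """Fraction of line rate left after the ATM cell tax."""
    cells = math.ceil((packet_bytes + AAL5_TRAILER) / CELL_PAYLOAD)
    return packet_bytes / (cells * CELL_SIZE)

# TCP ACK, classic 576-byte default, Ethernet MTU:
for size in (40, 576, 1500):
    print(f"{size:>4}-byte packets: {atm_efficiency(size):.1%} efficient")
```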
IXP Technologies - ATM
• Some advantages:
  • inter-member bandwidth limits
  • inter-member bandwidth measurement
  • “hard” enforcement of peering policy
    restrictions
• But:
  • High per-port cost, especially for
    >155Mbps
  • Limited track record for IXP applications
IXP Technologies - Gigabit Ethernet
• Cost-effective and simple high
  bandwidth
• Ideal to scale inter-switch links
• Router vendor support still
  limited
• Standards very new
• Highly promising for metropolitan
  and even longer distance links
      LINX Architecture
• Originally Cisco Catalyst 1200s:
  • 10baseT to member routers
  • FDDI ring between switches
• Until 98Q3:
  • Member primary connections by
    FDDI and 100baseT
  • Backup connections by 10baseT
  • FDDI and 100baseT inter-switch
Old LINX Topology
     Old LINX Infrastructure
• 5 Cisco Switches:
    • 2 x Catalyst 5000, 3 x Catalyst 1200
• 2 Plaintree switches
    • 2 x WaveSwitch 4800
•   FDDI backbone
•   Switched FDDI ports
•   10baseT & 100baseT ports
•   Media converters for fibre
    Ethernet (>100m)
        Growth Issues
• Lack of space for new members
• Exponential traffic growth
• Bottleneck in inter-switch links
• Needed to upgrade to a Gigabit
  backbone within the existing site
  by 98Q3
• Nx100Mbps trunking does not
  scale (MAE problems) - see the
  sketch below
Statistics and looking glass at http://www2.linx.net/
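A sketch of why Nx100Mbps trunking cannot keep up: with traffic doubling every ~4 months, each doubling of trunk width buys only one more doubling period. The 100 Mbps starting load is illustrative, not a LINX figure:

```python
import math

DOUBLING_MONTHS = 4
START_MBPS = 100  # illustrative starting inter-switch load

for n_links in (2, 4, 8):
    capacity = n_links * 100
    months = DOUBLING_MONTHS * math.log2(capacity / START_MBPS)
    print(f"{n_links} x 100 Mbps trunk outgrown in ~{months:.0f} months")
```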
         Switch Issues
• Catalyst and Plaintree switches no
  longer in use
  • Catalyst 5000s appeared to have
    broadcast scaling issues regardless
    of Supervisor Engine
  • FDDI could no longer cope
  • Plaintree switches had proven too
    unstable and unmanageable
  • Catalyst 1200s at end of useful life
  LINX Growth Solutions
• Find second site within 5km
  Gigabit Ethernet range via open
  tender
• Secure diverse dark/dim fibre
  between sites from carriers
• Upgrade switches to support
  Gigabit links between them
• Do not offer Gigabit member
  connections yet
   LINX Growth Obstacles
• Existing Telehouse site full until
  99Q3 extension ready
• Poor response to Q4 97 site ITT:
  • only 3 serious bidders
  • successful bidder pulled out after
    messing us around for 6 months :-(
• Only two carriers were prepared
  and able to offer dark/dim fibre
  after months of discussions
  Gigabit Switch Options
• Evaluated 6 vendors:
  • Cabletron/Digital, Cisco, Extreme,
    Foundry, Packet Engines, Plaintree
• Some highly cost-effective options
  available
• But needed non-blocking,
  modular, future-proof equipment,
  not workgroup boxes
         Metro Gigabit
• No real MAN-distance fibre to test
  kit out on :-(
• LINX member COLT kindly lent us
  a “big drum of fibre”
• Most kit appears to work to at
  least 5km (see the sketch below)
• Some interoperability issues with
  dim-to-dark management
  converter boxes
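A back-of-the-envelope optical budget suggests why most kit runs comfortably past the 5km point; the numbers below are illustrative round figures, not quoted from the Gigabit Ethernet spec:

```python
# Rough optical link budget for a metro Gigabit span.
# All figures are assumed round numbers for illustration.
TX_POWER_DBM = -9.5     # assumed worst-case launch power
RX_SENS_DBM = -19.0     # assumed receiver sensitivity
FIBRE_DB_PER_KM = 0.4   # typical single-mode loss at 1310 nm
CONNECTOR_DB = 1.5      # assumed patch/splice losses

budget_db = TX_POWER_DBM - RX_SENS_DBM
reach_km = (budget_db - CONNECTOR_DB) / FIBRE_DB_PER_KM
print(f"budget {budget_db:.1f} dB -> reach ~{reach_km:.0f} km")
```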
              Telehouse
• Located in London Docklands
  • on the meridian line at 0° longitude!
• 24x7 manned, controlled access
• Highly resilient infrastructure
• Diverse SDH fibre from most UK
  carriers
• Diverse power from national grid,
  multiple generators
• Owned by consortium of Japanese
  banks, KDD, BT
    LINX and Telehouse
• Telehouse is a “co-location” provider
  • computer and telecoms “hotel”
• LINX is customer
• About 100 ISPs are customers,
  including 50 LINX members
  • other members get space from LINX
• Facilitates LAN interconnection
         LINX 2nd Site
• Secured good deal with two
  carriers for diverse fibre
  • but only because LINX is a
    special case
• New ITT:
  • bid deadline mid-Aug 98
  • 8 submissions
• Awarded to TeleCity Sep 98
      LINX and TeleCity
• TeleCity is a new VC-funded co-lo startup
  • sites in Manchester, London
  • London site 3 miles from Telehouse
• Same LINX relationship as
  Telehouse
  • choice for members
• Space for 800 customer racks
• LINX has 16-rack suite
        New Infrastructure
• Packet Engines PR-5200
    • Chassis-based 16-slot switch
    • Non-blocking 52Gbps backplane
    • Used for our core, primary switches
    • One in Telehouse, one in TeleCity
    • Will need a second one in Telehouse
      within this quarter
    • Supports 1000LX, 1000SX, FDDI and
      10/100 Ethernet
     New Infrastructure
• Packet Engines PR-1000:
    • Small version of PR-5200
    • 1U switch; 2x SX and 20x 10/100
    • Same chipset as 5200
• Extreme Summit 48:
    • Used for second connections
    • Gives vendor resiliency
    • Excellent edge switch -
      low cost per port
    • 2x Gigabit, 48x 10/100 Ethernet
     New Infrastructure
• Topology changes (see the sketch
  below):
  • Aim to be able to have a major
    failure in one switch without
    affecting member connectivity
  • Aim to have major failures on
    inter-switch links without
    affecting connectivity
  • Ensure that inter-switch
    connections are not bottlenecks
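A toy Python check of the single-switch-failure goal: every member keeps at least one live port when any one switch dies. The member-to-switch assignments are illustrative, not the real LINX wiring, and the gigabit inter-switch mesh is assumed to stay connected:

```python
# Each member has a primary and a backup port on different switches.
member_ports = {
    "memberA": {"core-telehouse", "edge-telecity"},
    "memberB": {"core-telecity", "edge-telehouse"},
    "memberC": {"core-telehouse", "edge-telehouse"},
}

switches = set().union(*member_ports.values())
for failed in sorted(switches):
    isolated = [m for m, ports in member_ports.items()
                if not (ports - {failed})]
    print(f"lose {failed}: "
          f"{'OK' if not isolated else 'isolated: ' + ', '.join(isolated)}")
```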
        New backbone
• All primary inter-switch links are
  now gigabit
• New kit on order to ensure that all
  inter-switch links are gigabit
• Inter-switch traffic minimised by
  keeping all primary and all backup
  traffic on their own switches
        Current Status
• Old switches no longer in use
• New Switches live since Dec 98
• TeleCity site has been running
  since Dec 98
• First in-service member
  connections at TeleCity soon
• Capacity for up to 100x traffic
  growth
     IXP Switch Futures
• Vendor claims of 1000baseProprietary
  50km+ range are interesting
• Need abuse prevention tools:
   • port filtering, RMON
• Need traffic control tools:
   • member/member bandwidth limiting
     and measurement
• What inter-switch technology will
  support Gigabit member connections?
          Conclusions
• Extending Gigabit beyond your
  LAN is hard, but not for technical
  reasons
• Only worth trying if you have your
  own fibre
• Some London carriers are
  meeting the challenge of providing
  dark fibre
  • now 4-5 will do this
Contact Information

 •   http://www.linx.net/
 •   info@linx.org
 •   Tel +44 1733 705000
•   Fax +44 1733 353929