
Mid-Atlantic Crossroads


What’s MAX Production Been Up to?
Presentation to MAX membership

Fall 07 Member Meeting

Dan Magorian
Director of Engineering &
 What has the Production Side of MAX been
  doing since the Spring member meeting?
• The last 6 months have seen one of the biggest network
  changeovers in MAX’s history! (“The Big Move”)
   –   We interviewed many potential dwdm system vendors.
   –   Did a pseudo-RFP process with UMD procurement helping.
   –   Worked with our technical advisory committee (TAC, thanks guys!)
   –   Took field trips to the short-list vendors’ sites for lab testing.
   –   Selected a vendor (Fujitsu Flashwave 7500).
   –   Got the PO expedited through UMD Purchasing in record time.
   –   Got delivery expedited through Fujitsu.
   –   Installed in MAX lab, configured, and out to field in one month.
   –   Including a lot of prep, ripping out Movaz dwdm systems, customer
       coordination, and cutover in 3 main pops.
• Phase 1 of Fujitsu Flashwave 7500 system has been
  selected, procured, and installed in record time!!
Example: Baltimore DWDM installation timetable as of July.
    It slipped about a month, but was still very aggressive.

•   7/27 Parts arrived, breakers changed, Fujitsu 7500 PO cut.
•   8/3 Norwin and Dave I. move Force10, install MRV 10G
•   8/3 Dan and MAX folks have 10G lambda Mclean ready
•   8/10 Dan and MAX engineers at Fujitsu training TX
•   8/17 Balt peerings moved to Force10s/T640s
•   8/17 Bookham filters arrived, Aegis power mons installed.
•   8/24 M40e, Dell, 6 SP colo rack 3 cleared.
•   8/24 (depends on Fujitsu ship) Fujitsu gear staged in lab
•   8/31 Fujitsu DWDM and switch installed in colo rack 3
•   9/7-14 Move participant peerings to lambdas on Bookhams
•   9/21 Mop up of Aegis power monitor mrtgs, etc
            Where are we today vs April?
• 3 main pops in McLean (LVL3), College Park, and 6 St
  Paul Baltimore are almost complete.
• This included a major move of our main UMD pop:
   –   into the NWMD colo area of UMD bldg 224 room 0302
   –   moving out of OIT colo area of UMD 224 room 0312.
   –   involved renovating the racks, moving over MD DHMH gear
   –   tie fibers, unfortunately coupled with huge fiber contractor hassles
   –   NWMD folks (Greg and Tim) were very helpful.
• Still to be done:
   – tie fiber in 6 St Paul (do it ourselves next week with our new
     Sumitomo bulk fusion splicer)
   – Finishing up the BERnet dwdm filter cutovers
   – Phase 2 dwdm, replacing ancient 2000 Luxn dwdm DC ring.
• Very proud that we moved all the customer and backbone
  lambdas with only tiny amounts of downtime for the cuts!
• Especially want to thank Quang, Dave, Matt, and Chris!
In addition to the dwdm changeover, the other pop
    moves have also been a huge piece of work
• In Mclean, had to move the HOPI rack to Internet2’s suite
• In Baltimore, we’re sharing USM’s Force10 switches, and
  removed the BERnet Juniper M40e. Lot of cutover work.
• Moved the NGIX/E east coast Fednet peer point
   –   Procured, tested in lab, and installed in new CLPK pop.
   –   Including a lot of RMA problems with 10G interfaces.
   –   Lot of jumper work, config moves and night cuts to get done.
   –   Monday just moved out CLPK Dell customers.
   –   Next up: moving the lab T640 to the new CLPK pop, new jumpers,
       config move and consolidation.
• We had intended to have new dense 1U Force10 or
  Foundry switches selected and installed
   – But found that their OSes were immature/unstable.
   – Had to do an initial one in Equinix pop to support new 10G link.
   – So made decision to consolidate onto Cisco 6509s for Phase 1,
     postpone Phase 2 switches till spring 08.
RR402 before: 48V PDUs, Dell switch and inverters, Force10 (top), Juniper M40e (bottom).
RR402 after: Fujitsu ROADM optical shelf (top), transponder 1-16 shelf (bottom)
with 2 10G lambdas installed, Cisco 2811 “out-of-band” DCC router with console
cables, Fujitsu XG2000 “color translator” XFP switch. Still to be installed:
transponder 17-32 shelf to hold space.
RR202 after: Force10 E300 relocated (top, still needs to be moved up), 3
Bookham 40ch filters, Aegis dwdm power monitor, tie fiber panel to RR402.
MAX Production topology Spr 07 (diagram): T640s at MCLN and CLPK, M40e in
BALT, with rings connecting MCLN, CLPK, BALT, ASHB, ARLG, DCGW, and DCNE;
peerings to National LambdaRail, Internet2 NewNet, Cogent ISP, R&E nets, and
old Abilene.
   1. Original Zhone dwdm over Qwest fiber (Ring 1)
   2. Movaz dwdm over State Md fiber (Ring 2)
   3. Gige on HHMI dwdm over Abovenet fiber (Ring 3)
   4. Univ Sys Md MRV dwdm, various fiber (Ring 4)
MAX Production topology Fall/07 (diagram): same pops, now with a 10G backbone
between the MCLN and CLPK T640s, a new 10G production lambda to 6 St Paul and
660 RW in Baltimore, and new research fiber.
   1. Original Zhone dwdm over Qwest fiber (Ring 1)
   2. Fujitsu dwdm over State Md fiber (Ring 2)
   3. 10G on HHMI dwdm over Abovenet fiber (Ring 3)
   4. 10G on Univ Sys Md MRV dwdm (Ring 4)
    BERnet client-side DWDM approach
(diagram) A 40-wavelength client-side path links JHMI, MIT 300 Lexington,
JHU, UMBC, 660 Redwood, and 6 St. Paul to the new 40-wavelength Fujitsu dwdm
at College Park and MCLN (NLR & I2): one transponder pair to pay for and
provision end to end.
(UMBC is connected at 6 SP; Sailor and Morgan have also joined.)
More “Client Dwdm” examples between participants
(diagram) XFP pairs run on assigned wavelengths over 40-channel dwdm filters
(40 km reach, $6K) between the JHU switch, 300 W Lexington, 660 Redwood,
6 St. Paul, and UMBC. The red lambda is to DC, blue to NYC, green local to
UMBC. All each participant fiber needs is a filter pair, not a full dwdm
chassis.
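The 40-channel filters above carve a fiber into fixed wavelengths on the standard ITU DWDM grid. As a rough illustration of how such a grid is laid out (the actual channel plan on the Bookham filters is not specified here), a 100 GHz-spaced grid anchored at 193.1 THz can be computed as:

```python
# Sketch: ITU-T G.694.1 DWDM grid with 100 GHz channel spacing.
# Illustrative only -- not MAX's actual channel assignments.
C = 299_792_458  # vacuum speed of light, m/s

def channel_frequency_thz(n: int) -> float:
    """Frequency of grid channel n, anchored at 193.1 THz."""
    return 193.1 + n * 0.1

def channel_wavelength_nm(n: int) -> float:
    """Approximate vacuum wavelength for grid channel n."""
    return C / (channel_frequency_thz(n) * 1e12) * 1e9

# A 40-channel band starting at the anchor frequency:
grid = [(n, round(channel_wavelength_nm(n), 2)) for n in range(40)]
print(grid[0])  # channel 0 sits at 193.1 THz, about 1552.52 nm
```

Each XFP pair is simply tuned (or filtered) to one of these wavelengths, which is why a filter pair at each end suffices.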
              BERnet Production L1/L2 topology as of Nov
(diagram) BERnet participants cross-connect to USM Force10s at Redwood St and
6 St Paul, with USM MRV and new BERnet research dwdm at each site. Participant
production lambdas and new 10G lambdas run over the BERnet production 10G
lambda and the MAX Fujitsu dwdm to the new MAX 6509s and the T640s at MCLN
and CLPK, with the Phase 2 DC ring to follow.
          Next Steps for the Big Move
• The NGIX 6509 chassis that just freed up moves next week to
  the MCLN installation, with a connecting 10G lambda. This is the
  start of MAX’s Layer 2 service offering.
• USM finishing optical work on 660-6SP MRV dwdm link.
• Will put in BALT production 10G to MCLN, allows protected
  double peerings with MAX T640s.
• 40 channel filter installs: the 6 SP/660 RW ends are in (except
  Sailor); still need to install/test participant ends and transition
  fibers from 1310 to dwdm: 660-6SP, JHU/JHMI, UMBC, Sailor,
  Morgan. Also Pat Gary’s group at CLPK. Then bring up the
  SFP/XFP lambdas and set up Aegis power monitors / web pages.
• Move of CLPK Juniper T640 to new pop next: big one.
• Hope to have all pop moves done by end of Dec/early Jan.
  Happy to give tours!
                Phase 2 of dwdm system

• In spring will continue to unify new Fujitsu dwdm system.
• Ironic:
    –   Phase 2 is replacing our original Luxn/Zhone system from 2000,
    –   while Phase 1 replaced the Movaz/Advas that came later.
    –   Those were reversed due to the need to get the main Ring 2 changed first.
    –   So now we’re moving on to change over the original Ring 1.
    –   The Luxn/Zhone dwdm is now completely obsolete and really unsupported.
• Still an issue with less traffic to DC. One 10G will hold L3
  traffic to participants for a while.
   – Very interested in hearing/collaborating on DC lambda needs.
   – There is an initiative with the Quilt for low-cost lambdas, which
     we’re hoping will result in Qwest offering them to MAX and the rest
     of the community, feeding lambdas from the original DC Qwest pop.
• Get involved with TAC to hear details:
Participant Redundant Peering Initiative

(diagram) USM, NIH, and JHU campuses dual-homed through the Fujitsu dwdm to
both the MCLN and CLPK routers.

Have been promoting this since 2004. But now we want to really emphasize
that with the new dwdm infrastructure we can easily double-peer your campus
to both routers for high-9s availability. 8 folks so far.
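The “high-9s” payoff of double-peering follows from the standard independent-parallel-paths approximation: both peerings must fail at once for the campus to lose service. A minimal sketch, using a hypothetical 99.9% single-path figure (not a measured MAX availability number):

```python
# Sketch: availability of two independent peering paths in parallel.
# The single-path availability below is hypothetical, for illustration.
def parallel_availability(a1: float, a2: float) -> float:
    """Service is down only if both paths are down simultaneously."""
    return 1.0 - (1.0 - a1) * (1.0 - a2)

single = 0.999  # one router peering: "three nines"
double = parallel_availability(single, single)
print(f"{double:.6f}")  # two independent peerings: "six nines"
```

The approximation assumes independent failures; shared fiber or power would reduce the real gain.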
  RFC2547 VRFs (separate routing tables)
    expansion gives participants choice
• Since the Internet2 and NLR merger is not happening, a
  converged network does not appear to be in the cards.
• This means business as usual: I2 and NLR acting as
  competitors dividing the community, trying to pull RONs to
  “their” side, increasingly acrimonious.
• We intend to do our best to handle this for folks (to the
  extent possible) by playing in both camps, and offering
  participants choice.
• So have traded with VT/MATP for NLR layer 3 PacketNet
  connection, in addition to (not replacing) I2 connection.
• Technically, have implemented this on Juniper T640
  routers as additional “VRFs”: separate routing tables
  which we can move participant connections into.
• Dave Diller is “chief VRF wrangler”, did tricky “blend” work.
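Conceptually, an RFC 2547-style VRF is just an independent routing table with its own longest-prefix match, so the same destination can take different paths per participant. A toy sketch of that idea (the VRF names follow the slide; the prefixes and next hops are invented, and this is not Juniper configuration):

```python
# Sketch: VRFs modeled as independent routing tables with
# longest-prefix match. Routes/next-hops are invented examples.
from ipaddress import ip_address, ip_network

class Vrf:
    def __init__(self, name: str):
        self.name = name
        self.routes = {}  # ip_network -> next-hop label

    def add_route(self, prefix: str, next_hop: str) -> None:
        self.routes[ip_network(prefix)] = next_hop

    def lookup(self, dest: str):
        """Longest-prefix match within this VRF only."""
        addr = ip_address(dest)
        matches = [net for net in self.routes if addr in net]
        if not matches:
            return None
        return self.routes[max(matches, key=lambda net: net.prefixlen)]

# The same destination routes differently depending on the VRF a
# participant's connection has been placed into:
i2, nlr = Vrf("I2"), Vrf("NLR")
i2.add_route("10.0.0.0/8", "via-internet2")
nlr.add_route("10.0.0.0/8", "via-nlr")
print(i2.lookup("10.1.2.3"), nlr.lookup("10.1.2.3"))
```

Moving a participant between camps then amounts to re-homing their connection into a different table, with the “blended” VRF combining routes from both.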
MAX has run VRFs for years, now has 5

(diagram) The MAX infrastructure carries five VRFs over the participant
peering vlans: an NLR VRF, an I2 VRF, a blended I2 & NLR VRF, a Qwest VRF,
and a Cogent VRF.
              New Service: Layer 2 vlans

• Announced in Spring member meeting.
• MAX has traditionally run Layer 1 (optical) and Layer 3
  (routed IP) service.
   – Only the NGIX/E exchange point is a Layer 2 service.
   – There continues to be demand for non-routed L2 service (vlans), similar
     to NLR’s FrameNet service.
• This means that folks will be able to stretch private vlans
  from DC to Mclean to Baltimore over shared 10G channel.
• Also will be able to provision dedicated ethernets.
• Next week we’re moving Cisco 6509 out to Mclean early to
  get this started, will interconnect two main switches with
  10G lambda.
• Haven’t figured out service costs yet, will involve TAC.
• Your ideas and feedback are welcome.
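Stretching private vlans over a shared 10G channel rests on IEEE 802.1Q tagging: each participant’s frames carry a 4-byte VLAN tag so the switches can keep them separate. A minimal sketch of the framing (illustrative only, not the 6509 configuration):

```python
# Sketch: insert an IEEE 802.1Q tag into an untagged Ethernet frame.
import struct

TPID = 0x8100  # 802.1Q Tag Protocol Identifier

def tag_frame(frame: bytes, vlan_id: int, pcp: int = 0) -> bytes:
    """Insert the 4-byte 802.1Q header after the dst/src MAC addresses."""
    if not 0 < vlan_id < 4095:
        raise ValueError("VLAN ID must be 1..4094")
    tci = (pcp << 13) | vlan_id  # priority bits + VLAN ID
    tag = struct.pack("!HH", TPID, tci)
    return frame[:12] + tag + frame[12:]

# 12 bytes of MAC addresses followed by EtherType 0x0800 (IPv4):
untagged = bytes(12) + struct.pack("!H", 0x0800)
tagged = tag_frame(untagged, vlan_id=100)
assert tagged[12:14] == b"\x81\x00"  # TPID sits where the EtherType was
```

A dedicated ethernet service is then just a vlan whose bandwidth is not shared with other tags on the 10G lambda.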
Long-distance high-level diagram of new dwdm system (meeting MIT in Baltimore)

(diagram) MIT Nortel dwdm nodes in Albany, Boston, and NYC carry lambdas down
to the BERnet dwdm at 6 St Paul, where BERnet participants connect; NLR and
I2 lambdas run over the MAX dwdm at MCLN and CLPK.
             New service: Flow analysis
• We announced this in the spring. Turns out it would be
  useful to be able to characterize traffic flows passing through
  MAX infrastructure for participant use.
   – We bought Juniper hardware assists and big Linux pc with lots of
     disk to crunch and store a year’s data.
   – Using open-source Flow Tools analysis packages.
• We are not snooping packets: contents are not collected by Netflow,
  but it does record source/destination addresses and ports. So
  there are some confidentiality issues; it is not anonymous yet.
• Have done a prototype for people to look at. Send email to for url and login/password if you’re
  interested in testing.
• Ideally, would like web interface where people put in AS #s
   – Then could look at flows to/from their institutions
   – Could also look at protocol (traffic type), top talkers, etc.
• Interested in people’s ideas and feedback during afternoon.
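The per-AS and top-talker views described above are aggregations over flow records. A minimal sketch of that aggregation over invented records (private-range AS numbers and made-up byte counts; the real deployment reads flow-tools exports rather than an in-memory list):

```python
# Sketch: per-AS byte counts and top talkers from NetFlow-like
# records. All records here are invented for illustration.
from collections import Counter

flows = [
    # (src_ip, dst_ip, dst_port, src_as, bytes)
    ("10.0.0.1", "10.1.0.5", 443, 64512, 9_000_000),
    ("10.0.0.2", "10.1.0.5",  80, 64512, 1_000_000),
    ("10.0.0.1", "10.2.0.9",  22, 64513, 4_000_000),
]

bytes_by_as = Counter()
bytes_by_talker = Counter()
for src, dst, port, asn, nbytes in flows:
    bytes_by_as[asn] += nbytes      # flows to/from an institution's AS
    bytes_by_talker[src] += nbytes  # top talkers by bytes sent

print(bytes_by_as.most_common(1))      # heaviest source AS
print(bytes_by_talker.most_common(1))  # top talker
```

The envisioned web interface would key the first Counter on the AS numbers participants enter, and a similar pass over the port column gives the protocol/traffic-type breakdown.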
                  Peering Services

• There has been some interest in Internet2’s Commercial
  Peering Service and/or Cenic’s TransitRail.
• Right now, we offer Cogent at $16/mb/m through GWU and
  Qwest for $28/mb/m, soon to drop several $/mb/m based on
  Quilt contracts. Demand for Cogent has been slow.
• Have been thinking about mixing one or both peering
  services in with the Cogent offering, might enable us to drop
  price to around $10/mb/m, depending on traffic mix.
• Problem is, demand has to be enough to cover the “peering
  club” and additional infrastructure costs. We tried this long
  ago with direct Cogent connection, not enough folks signed.
• Would people be interested in this? Hope to hear
  discussion and feedback in the afternoon sessions.
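The “around $10/mb/m depending on traffic mix” figure is a blended-rate calculation: traffic offloaded to a peering service is billed at a lower rate, and the average falls with the offload fraction. A hedged sketch; only the $16 Cogent rate comes from the slide, while the peering rate and traffic split are assumptions for illustration:

```python
# Sketch: blended $/Mb/month when a fraction of traffic moves from
# transit to a cheaper peering service. The peering rate and the
# 60% offload fraction below are hypothetical.
def blended_rate(transit_rate: float, peering_rate: float,
                 peering_fraction: float) -> float:
    """Traffic-weighted average cost per Mb/month."""
    return (transit_rate * (1 - peering_fraction)
            + peering_rate * peering_fraction)

# $16 transit (Cogent, per the slide), assumed $5 peering-club rate,
# 60% of traffic reachable via peering:
print(blended_rate(16, 5, 0.6))  # 9.4 -- in the ~$10 ballpark
```

The same arithmetic shows the risk the slide raises: if demand (the offload fraction) is low, the blend barely beats the transit rate while the fixed peering-club costs remain.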
    Closing thought: the new Fujitsu dwdm
        system is part of a sea change
• Not just needed to create a unified infrastructure across the MAX region
  and replace aging vendor hardware.
• Also lays foundation for dynamic circuit and lambda services activities
  that are happening in the community
• Want to get people thinking about implications of dynamic allocation,
  dedicated rather than shared resources:
    – Circuit-like services for high-bandwidth low-latency projects
    – Not replacement of “regular” ip routing, but in addition
    – Possible campus strategies for fanout. Need to plan for how to deliver,
      just as BERnet is doing.
    – Facilitating researcher use of this as it comes about.
• People may say, “We don’t have any of those applications
  on our campus yet”.
    – But suddenly may have researchers with check in hand
    – Eg, in planning phase now for DC ring, need to forecast.
    – Talk to us about what you’re doing and thinking!
