Building Fast, Flexible Virtual Networks on Commodity Hardware

Nick Feamster
Georgia Tech
Trellis: A Platform for Building Flexible, Fast Virtual Networks on Commodity Hardware. Mundada, Bhatia, Motiwala, Valancius, Muhlbauer, Bavier, Feamster, Rexford, and Peterson. ROADS 2008.
Building a Fast, Virtualized Data Plane with Programmable Hardware. Anwer and Feamster. (In submission.)
Concurrent Architectures are Better than One (“Cabo”)
 • Infrastructure providers: supply the physical infrastructure
   needed to build networks

 • Service providers: build networks from “slices” of physical
   infrastructure obtained from one or more infrastructure providers

   The same entity may sometimes play both roles.
Network Virtualization: Characteristics
 Sharing
 • Multiple logical routers on a single platform
 • Resource isolation in CPU, memory, bandwidth,
   forwarding tables, …

 Customizability
  • Customizable routing and forwarding software
  • General-purpose CPUs for the control plane
  • Network processors and FPGAs for data plane

Requirements
 • Scalable sharing (to support many networks)

 • Performance (to support real traffic, users)

 • Flexibility (to support custom network services)

 • Isolation (to protect networks from each other)




VINI
[Figure: VINI virtual topology; virtual nodes running BGP connect a client (c) to a server (s).]
• Prototype, deploy, evaluate new network architectures
   – Carry real traffic for real users
   – More controlled conditions than PlanetLab
• Extend PlanetLab with per-slice Layer 2 virtual networks
   – Support research at Layer 3 and above
PL-VINI
[Figure: PL-VINI node architecture. A PlanetLab VM runs XORP (routing protocols) in UML with interfaces eth0-eth3; a Click packet forwarding engine with a UmlSwitch element, tunnel table, and filters carries data over UDP tunnels. Control and data paths are separate.]

• Abstractions
   – Virtual hosts connected by virtual P2P links
   – Per-virtual-host routing table, interfaces

• Drawbacks
   – Poor performance:
      • 50 Kpps aggregate
      • 200 Mb/s TCP throughput
   – Customization difficult
Trellis
[Figure: Trellis node architecture. A Trellis virtual host runs the application in user space above a kernel FIB; two virtual NICs connect through bridges and shapers to EGRE tunnels in the Trellis substrate.]

• Same abstractions as PL-VINI
   – Virtual hosts and links
   – Push performance, ease of use
• Full network-stack virtualization
• Run XORP, Quagga in a slice
   – Support data plane in kernel
• Approach native Linux kernel performance (15x PL-VINI)
• Be an “early adopter” of new Linux virtualization work
Virtual Hosts
 • Use container-based virtualization
    – Xen, VMware: poor scalability and performance


 • Option #1: Linux Vserver
    – Containers without network virtualization
    – PlanetLab slices share single IP address, port space


 • Option #2: OpenVZ
    – Mature container-based approach
    – Roughly equivalent to Vserver
    – Has full network virtualization


Network Containers for Linux
 • Create multiple copies of TCP/IP stack

 • Per-network container
    – Kernel IPv4 and IPv6 routing tables
    – Physical or virtual interfaces
    – iptables, traffic shaping, net.* sysctl variables


 • Trellis: marry Vserver + NetNS
    – Be an early adopter of the new interfaces
    – Otherwise stay close to PlanetLab



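
As a concrete illustration of the NetNS mechanism referenced above, the fragment below drives the standard ip(8) tooling from Python to create a network container with its own interfaces and routing table. This is a minimal sketch of the kernel feature, not Trellis code; the names and addresses (ve1, veth-ve1, 10.0.1.0/24) are placeholders.

    # Minimal sketch of a Linux network namespace (NetNS): the namespace
    # gets its own interfaces, routing table, and iptables state.
    import subprocess

    def sh(cmd):
        subprocess.run(cmd.split(), check=True)

    sh("ip netns add ve1")                                     # new, empty network stack
    sh("ip link add veth-host type veth peer name veth-ve1")   # virtual NIC pair
    sh("ip link set veth-ve1 netns ve1")                       # one end goes inside ve1
    sh("ip netns exec ve1 ip addr add 10.0.1.2/24 dev veth-ve1")
    sh("ip netns exec ve1 ip link set veth-ve1 up")
    sh("ip netns exec ve1 ip route add default via 10.0.1.1")  # route visible only in ve1
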
Virtual Links: EGRE Tunnels
[Figure: Trellis node architecture, with the EGRE tunnels in the substrate highlighted.]

• Virtual Ethernet links
• Make minimal assumptions about the physical network between Trellis nodes
• Trellis: tunnel Ethernet over GRE over IP
   – Already a standard, but no Linux implementation
• Other approaches:
   – VLANs, MPLS, other network circuits or tunnels
   – These fit into our framework
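
The slide notes that Ethernet-over-GRE was already standardized but had no Linux implementation at the time; current kernels ship a gretap device that provides the same encapsulation, so a rough equivalent of one Trellis virtual link could be set up as sketched below. The interface name, endpoint addresses, and GRE key are placeholders, and this is not the Trellis code itself.

    # Sketch of an Ethernet-over-GRE-over-IP (EGRE-style) link endpoint.
    import subprocess

    def sh(cmd):
        subprocess.run(cmd.split(), check=True)

    LOCAL, REMOTE, KEY = "192.0.2.1", "192.0.2.2", "100"   # placeholder endpoints

    # Ethernet frames sent on egre0 are wrapped in GRE/IP and delivered to
    # the remote node, where a matching gretap device decapsulates them.
    sh(f"ip link add egre0 type gretap local {LOCAL} remote {REMOTE} key {KEY}")
    sh("ip link set egre0 up")
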
Tunnel Termination
 • Where should the EGRE tunnel interface terminate?
 • Inside the container: better performance
 • Outside the container: more flexibility
    – Transparently change the implementation
    – Process and shape traffic between the container and the tunnel
    – The user cannot manipulate the tunnel or shapers
 • Trellis: terminate tunnel outside container




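
Because the tunnel terminates in the substrate, the host can shape a slice's traffic at a point the slice cannot reach. A rough sketch, reusing the placeholder egre0 device from the previous example and an arbitrary 10 Mb/s rate:

    # Token-bucket shaper on the tunnel device: everything the container
    # sends toward the virtual link passes this qdisc, and the container
    # has no access to egre0 or to tc on the host.
    import subprocess

    subprocess.run(
        "tc qdisc add dev egre0 root tbf rate 10mbit burst 32k latency 50ms".split(),
        check=True)
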
Glue: Bridging
 • How to connect virtual hosts to tunnels?
   – Connecting two Ethernet interfaces
 • Linux software bridge
   – Ethernet bridge semantics; creates point-to-multipoint (P2M) links
   – Relatively poor performance
 • Common case: point-to-point (P2P) links
 • Trellis
   – Use the Linux bridge for P2M links
   – Create a new “shortbridge” for P2P links



Glue: Bridging
[Figure: Trellis node architecture, with the bridge elements between the virtual NICs and the shapers highlighted.]

• How to connect virtual hosts to EGRE tunnels?
   – Two Ethernet interfaces
• Linux software bridge
   – Ethernet bridge semantics
   – Supports P2M links
   – Relatively poor performance
• Common case: P2P links
• Trellis:
   – Use the Linux bridge for P2M links
   – New, optimized “shortbridge” module for P2P links
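
For the point-to-multipoint case handled by the standard Linux bridge, the glue amounts to enslaving the host end of the container's virtual NIC and the EGRE tunnel device to one bridge. A minimal sketch, reusing the placeholder names from the earlier examples; the Trellis “shortbridge” for the point-to-point case is a custom kernel module and is not shown.

    # Bridge the container-facing veth end and the EGRE tunnel on the host.
    import subprocess

    def sh(cmd):
        subprocess.run(cmd.split(), check=True)

    sh("ip link add br-ve1 type bridge")
    sh("ip link set veth-host master br-ve1")   # container side (host end of the veth pair)
    sh("ip link set egre0 master br-ve1")       # virtual-link side (EGRE tunnel device)
    sh("ip link set br-ve1 up")
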
IPv4 Packet Forwarding
[Bar chart: forwarding rate (kpps), 0-900, for PL-VINI, Xen, Trellis (Bridge), Trellis (Shortbridge), and native Linux.]

Trellis forwards packets at 2/3 of native performance and 10x faster than PL-VINI.
Virtualized Data Plane in Hardware
 • Software provides flexibility, but performance is poor and
   isolation is often inadequate

 • Idea: Forward packets exclusively in hardware
   – Platform: OpenVZ over NetFPGA
   – Challenge: Share common functions, while isolating
     functions that are specific to each virtual network




Accelerating the Data Plane

• Virtual environments in OpenVZ

• Interface to NetFPGA based on the Stanford reference router
Control Plane
• Virtual environments
  – Virtualize the control plane by running multiple virtual
    environments on the host (same as in Trellis)
  – Routing table updates pass through security daemon
  – Root user updates VMAC-VE table


• Hardware access control
  – VMAC-VE table/VE-ID controls access to hardware


• Control register
  – Used to multiplex VE to the appropriate hardware
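
A conceptual sketch of the check described above: a routing update from a virtual environment (VE) is accepted only if the root-maintained VMAC-VE table maps the given VMAC to that VE. The table contents, MAC addresses, and the write_hw_fib() helper are all hypothetical stand-ins; the real interface is the NetFPGA register and table API.

    # Hypothetical security-daemon check in front of the hardware FIB.
    VMAC_VE_TABLE = {                 # virtual MAC -> owning VE-ID, written by root
        "02:00:00:00:00:01": 1,
        "02:00:00:00:00:02": 2,
    }

    def write_hw_fib(ve_id, prefix, next_hop):
        # Stand-in for the real NetFPGA table write.
        print(f"FIB[VE {ve_id}] += {prefix} -> {next_hop}")

    def install_route(ve_id, vmac, prefix, next_hop):
        # A VE may only program forwarding entries behind a VMAC that the
        # VMAC-VE table assigns to it.
        if VMAC_VE_TABLE.get(vmac) != ve_id:
            raise PermissionError(f"VE {ve_id} may not program {vmac}")
        write_hw_fib(ve_id, prefix, next_hop)

    install_route(1, "02:00:00:00:00:01", "10.0.0.0/8", "10.0.1.1")   # accepted
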
Virtual Forwarding Table Mapping




[Figure: virtual forwarding table mapping.]
Share Common Functions
 • Common functions
   –   Packet decoding
   –   Calculating checksums
   –   Decrementing TTLs
   –   Input arbitration

 • VE-Specific Functions
   – FIB
   – IP lookup table
   – ARP table


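
The split can be pictured as one shared pipeline consulting per-VE state. The toy sketch below only illustrates that boundary in Python (the real lookup runs in FPGA logic); all tables, prefixes, and MAC addresses are made up.

    # Shared stage (TTL handling) runs once for every packet; the lookup
    # stage consults only the owning VE's private FIB and ARP tables.
    VE_FIB = {1: {"10.1.0.0/16": "10.1.0.1"},    # per-VE forwarding tables:
              2: {"10.1.0.0/16": "10.2.0.1"}}    # same prefix, different next hop
    VE_ARP = {1: {"10.1.0.1": "02:00:00:00:01:01"},
              2: {"10.2.0.1": "02:00:00:00:02:01"}}

    def forward(ve_id, dst_prefix, ttl):
        if ttl <= 1:                              # shared: TTL check/decrement
            return None
        next_hop = VE_FIB[ve_id].get(dst_prefix)  # VE-specific: FIB lookup
        if next_hop is None:
            return None
        return VE_ARP[ve_id][next_hop], ttl - 1   # VE-specific: ARP lookup

    print(forward(1, "10.1.0.0/16", 64))          # ('02:00:00:00:01:01', 63)
    print(forward(2, "10.1.0.0/16", 64))          # ('02:00:00:00:02:01', 63)
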
Forwarding Performance




[Figure: forwarding performance results.]
Efficiency


 • 53K logic cells
 • 202 units of block RAM

 Sharing common elements saves up to 75% in hardware resources compared to independent physical routers.
Conclusion
 • Virtualization allows physical hardware to be
   shared among many virtual networks
 • Tradeoffs: sharing, performance, and isolation
 • Two approaches
   – Trellis: Kernel-level packet forwarding
     (10x packet forwarding rate improvement vs. PL-VINI)
   – NetFPGA-based forwarding for virtual networks
     (same forwarding rate as NetFPGA-based router, with
     75% improvement in hardware resource utilization)




				