
                    Live Migration of Virtual Machines
                        Motivation
•   Load management
•   Maintenance of original host
•   Failure of the original host
•   Why are VMs a boon for this?
    – Avoid difficulties of process-level migration approaches, such as
      residual dependencies
    – In-memory state can be transferred in a consistent and efficient
      fashion. We can migrate an on-line game server or streaming
      server without requiring clients to reconnect.
    – Separation of concerns between users and operators. Very
      powerful tool for cluster administrators. Separate hardware and
      software considerations, consolidate clustered hardware into a
      single coherent managed domain
               Concerns
1. Minimize downtime
2. Minimize total migration time
3. Don’t disrupt active services through resource
   contention caused by migrating the OS.
                 Previous Approaches
•   Collective
     – To provide mobility to users who work on different physical hosts at different
       times, e.g. transfer OS from work to home while on the subway.
     – Optimized for slow links and longer time spans; stops OS execution for the
       duration of the transfer, with a set of enhancements to reduce the transmitted
       image size.
•   Zap
     – Partial OS virtualization to allow migration of process domains using a modified
       Linux kernel. Isolates all process-to-kernel interfaces such as file handles and
       sockets into a contained namespace that can be managed.
     – Faster than the Collective approach
     – Still takes several seconds to migrate
•   Process Migration
     – A long-standing hot research topic, but with very little use for real-world applications
     – Residual dependencies: open file descriptors, shared memory segments, etc.
       Original machine must remain available.
     – Sprite requires some system calls to be forwarded to the home machine.
                Design Issues
• Migrating Memory
  – How to move memory while minimizing downtime and total
    migration time?
• Local Resources
  – What to do with resources associated with the physical
    machine that the VM is migrating away from?
     • Memory
     • Connections to local devices such as disks and network
       interfaces
               Migrating Memory
• Three Phases of Memory Transfer:
   – Push Phase
   – Stop-and-copy Phase
   – Pull Phase
• Tradeoffs:
   – Pure stop-and-copy: simple, but potentially unacceptable
     downtime.
    – Pure demand-migration: A brief stop-and-copy transfers essential kernel
      data structures to the destination. The destination VM then starts, and
      other pages are transferred across the network on first use.
        • Much shorter downtime, but much longer total migration time.
        • Performance after migration likely to be unacceptable: numerous
          page faults served over the network.
                This Paper
• This paper: Pre-copy migration, combining a
  bounded iterative push phase with a typically
  very short stop-and-copy phase.
• Pre-copying occurs in rounds, in which pages
  transferred during round n are those modified in
  round n-1.
• A small set of frequently modified pages remains, which must
  be transferred during the final stop-and-copy (sketched below).
• Focused on the local area, not the wide area.
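A minimal Python sketch of the pre-copy round structure described above. The helper callables (send_pages, get_and_clear_dirty, suspend_vm) and the round bound and threshold values are assumptions standing in for the real Xen migration machinery, not the paper's actual implementation.

    def pre_copy_migrate(send_pages, get_and_clear_dirty, suspend_vm,
                         all_pages, max_rounds=30, small_set=50):
        """Bounded iterative pre-copy followed by a short stop-and-copy."""
        to_send = set(all_pages)                     # first round: transfer every page
        for _ in range(max_rounds):                  # bound on the iterative push phase
            send_pages(to_send)
            to_send = get_and_clear_dirty()          # round n sends what round n-1 dirtied
            if len(to_send) <= small_set:            # remaining hot set is small enough
                break
        suspend_vm()                                 # downtime begins here
        send_pages(to_send | get_and_clear_dirty())  # residual writable working set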
         Local Resources
• Network Resources
• Storage Resources
     Local Resources: Network
• Want the migrated OS to maintain all open network
  connections without relying on forwarding mechanisms.
  The migrating VM includes all protocol state and will
  carry its IP address with it.
• Assume: In cluster environment, network interfaces of
  source and destination typically exist on single switched
  LAN.
• Generate unsolicited ARP reply from the migrated host,
  advertising that the IP has moved to a new location.
   – Reconfigures peers to send packets to the new physical address
    – A few in-flight packets might get lost.
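A hedged sketch of generating the unsolicited ARP reply, using the third-party Scapy library. The interface name, IP, and MAC values are placeholders; the real system does this from the migrated host's own networking stack rather than a separate script.

    from scapy.all import Ether, ARP, sendp

    VM_IP  = "10.0.0.42"              # IP address carried by the migrating VM
    VM_MAC = "00:16:3e:12:34:56"      # VM's MAC address as seen on the new host
    IFACE  = "eth0"                   # interface on the destination host

    def announce_migration():
        # Broadcast "VM_IP is at VM_MAC" so peers update their ARP caches and
        # the switched LAN relearns which port the VM is now behind.
        pkt = (Ether(src=VM_MAC, dst="ff:ff:ff:ff:ff:ff") /
               ARP(op=2, hwsrc=VM_MAC, psrc=VM_IP,
                   hwdst="ff:ff:ff:ff:ff:ff", pdst=VM_IP))
        sendp(pkt, iface=IFACE, count=3)  # repeat a few times; some may be lost

    if __name__ == "__main__":
        announce_migration()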
    Local Resources: Storage
• Assume network-attached storage that is accessible from both
  source and destination hosts, so local disk state need not be migrated.
            Design Overview
•   Stage 0: Pre-Migration
•   Stage 1: Reservation
•   Stage 2: Iterative Pre-Copy
•   Stage 3: Stop-and-Copy
•   Stage 4: Commitment
•   Stage 5: Activation
       Writable Working Sets
• The biggest influence on performance is the overhead of
  transferring the virtual machine’s memory image; the set of
  pages being written to frequently (the writable working set)
  determines how effective pre-copy can be.
        Implementation Issues
•   Managed Migration v. Self Migration
•   Dynamic Rate Limiting
•   Rapid Page Dirtying
•   Paravirtualized Optimizations
             Managed v. Self
• Managed: Migration logic runs outside the VM being migrated.
• Managed: Performed by migration daemons
  running in the management VMs of the source
  and destination hosts. Responsible for creating a
  new VM on the destination machine and
  coordinating transfer of live system state over
  the network.
• Issue: How to figure out which pages to migrate?
  – First round: all pages transferred
  – Subsequent: Those that were dirtied in previous
    round, as indicated by a dirty bitmap copied from Xen
    at the start of each round
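As one concrete illustration, a small Python sketch of turning a packed dirty bitmap (one bit per page frame, as copied at the start of a round) into the list of page frame numbers to transfer. The bit layout assumed here is illustrative, not Xen's exact format.

    def dirty_pfns(bitmap: bytes) -> list[int]:
        # Decode a packed dirty bitmap: bit i of byte b marks page frame b*8 + i.
        pfns = []
        for byte_index, byte in enumerate(bitmap):
            while byte:
                bit = (byte & -byte).bit_length() - 1   # lowest set bit
                pfns.append(byte_index * 8 + bit)
                byte &= byte - 1                        # clear that bit
        return pfns

    # Example: page frames 0, 9, and 23 were dirtied in the previous round.
    assert dirty_pfns(bytes([0b00000001, 0b00000010, 0b10000000])) == [0, 9, 23]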
            Managed continued
• During normal operation, page tables managed by each
  guest OS are ones walked by the processor’s MMU to fill
  the TLB.
   – Possible because guest OSes are exposed to real physical
     addresses and so page tables they create do not need to be
     mapped to physical addresses by Xen.
• To log dirty pages, Xen inserts shadow page tables
  underneath running OS. Populated on demand from
  guest page tables.
• Shadow tables and the dirty bitmap are reset after each round of pre-copying
  (a toy model of this dirty logging follows below).
• When pre-copy phase no longer beneficial, OS is sent
  control message requesting that it suspend itself in a
  state suitable for migration.
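A toy Python model of this dirty-logging idea: shadow entries start read-only, the first write to a page "faults" and sets its bit in the dirty bitmap, and both structures are reset when the daemon harvests the bitmap for the next round. This is only an illustration of the mechanism, not Xen's shadow page table code.

    class DirtyLogger:
        def __init__(self, num_pages):
            self.num_pages = num_pages
            self.writable = [False] * num_pages   # shadow entries begin read-only
            self.dirty    = [False] * num_pages   # dirty bitmap handed to the daemon

        def guest_write(self, pfn):
            if not self.writable[pfn]:
                # First write since the last reset: fault into the "hypervisor",
                # mark the page dirty, then allow further writes without faulting.
                self.dirty[pfn] = True
                self.writable[pfn] = True

        def harvest_and_reset(self):
            # Copy out the dirty set and reset logging for the next round.
            dirtied = {pfn for pfn, d in enumerate(self.dirty) if d}
            self.writable = [False] * self.num_pages
            self.dirty    = [False] * self.num_pages
            return dirtied

    # One round: the guest writes pages 3 and 7; the daemon harvests {3, 7}.
    log = DirtyLogger(num_pages=16)
    log.guest_write(3); log.guest_write(7); log.guest_write(3)
    assert log.harvest_and_reset() == {3, 7}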
                      Self Migration
• Majority of implementation within OS being migrated.
• No modifications necessary to Xen or management software on
  source machine.
• Migration stub necessary on destination machine to listen for
  incoming migration requests, create appropriate empty VM, and
  receive migrated system state.
• Major implementation difficulty:
   – Transferring a consistent OS checkpoint: the OS must continue to run in order
     to transfer its own final state.
   – Solution: logically checkpoint the OS on entry to a final two-stage stop-and-
     copy phase.
       • First stage disables all OS activity except for migration, then performs a
         final scan of the dirty bitmap and copies dirty pages to a shadow buffer.
       • Second/Final Stage: Transfers contents of the shadow buffer – page
         updates ignored during this transfer.
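A rough Python sketch of that two-stage final copy: stage one snapshots the remaining dirty pages into a shadow buffer while normal activity is disabled, and stage two transmits the buffer, ignoring any page updates the transmission itself causes. The memory and hook representations are simplifications for illustration.

    def final_stop_and_copy(memory, dirty, disable_activity, transmit):
        # memory           -- dict: pfn -> page contents of the OS being migrated
        # dirty            -- set of pfns still marked dirty at checkpoint time
        # disable_activity -- hypothetical hook halting everything but migration
        # transmit(pfn, b) -- hypothetical hook sending one page to the target

        # Stage 1: quiesce the OS, then copy the remaining dirty pages into a
        # shadow buffer so later writes cannot tear the checkpoint.
        disable_activity()
        shadow = {pfn: bytes(memory[pfn]) for pfn in dirty}

        # Stage 2: transfer the shadow buffer. Pages dirtied by the act of
        # transferring are deliberately ignored; the checkpoint is already
        # logically consistent.
        for pfn, contents in shadow.items():
            transmit(pfn, contents)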
         Dynamic Rate-Limiting
• A low bandwidth limit avoids impacting running services, but
  the hottest pages are not amenable to pre-copy and can lead to
  extended downtime.
• Solution: Dynamically adapt bandwidth limit.
   – First round of pre-copy: minimum allowed bandwidth.
   – Each subsequent round calculates dirtying rate: number of
     pages dirtied divided by length of phase.
   – Bandwidth limit for the next round determined by adding a constant
     increment (50 Mbit/sec) to the previous round’s dirtying rate.
   – Terminate when calculated rate greater than administrator’s
     chosen maximum, or when not much left to transfer.
   – Last round: maximum rate.
   – Keeps bandwidth usage low during the transfer of the majority of
     the pages, increases only at the end.
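A small Python sketch of this adaptation rule. The 4 KiB page size and the "not much left" threshold are assumed values for the example; the 50 Mbit/sec increment is the constant named above.

    PAGE_BYTES    = 4096               # assumed page size
    INCREMENT_BPS = 50_000_000         # constant 50 Mbit/sec increment

    def next_round_limit(pages_dirtied, round_secs, min_bps, max_bps):
        # Returns (bandwidth limit for the next round, enter stop-and-copy?).
        dirty_rate_bps = pages_dirtied * PAGE_BYTES * 8 / round_secs
        limit = max(min_bps, dirty_rate_bps + INCREMENT_BPS)
        little_left = pages_dirtied * PAGE_BYTES < 256 * 1024   # assumed threshold
        if limit > max_bps or little_left:
            return max_bps, True       # final round runs at the maximum rate
        return limit, False

    # Example: 2,000 pages dirtied over a 4-second round, admin cap 500 Mbit/s.
    limit, final = next_round_limit(2_000, 4.0, min_bps=100e6, max_bps=500e6)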
         Rapid Page Dirtying
• Every OS has some “hot” pages that are
  updated extremely frequently; there is not much use in
  copying them, as they will be dirty again almost immediately.
• Peek at current round’s dirty bitmap and transfer
  only those pages dirtied in the previous round
  and that have not been dirtied again at the time
  we scan them.
• Page dirtying is often physically clustered.
  – Scan the VM’s physical memory space in a pseudo-random
    order.
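A Python sketch of the peeking heuristic combined with the pseudo-random scan order: walk the previous round's dirty pages in shuffled order and skip any page that the current round's bitmap already shows as dirty again, since sending it now would be wasted work. The peek callable is a hypothetical stand-in.

    import random

    def pages_worth_sending(prev_dirty, peek_current_dirty, seed=0):
        # prev_dirty         -- set of pages dirtied during the previous round
        # peek_current_dirty -- hypothetical callable returning pages dirtied so
        #                       far in the *current* round
        order = list(prev_dirty)
        random.Random(seed).shuffle(order)   # pseudo-random scan order
        selected = []
        for pfn in order:
            if pfn in peek_current_dirty():  # already re-dirtied: skip it, it
                continue                     # would just be dirty again anyway
            selected.append(pfn)
        return selected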
           Paravirtualized Optimizations
• Stun Rogue Processes
• Freeing Page Cache Pages
