Chapter 4

             Naming

• Part I Naming Entities
• Part II Locating Mobile Entities
• Part III Garbage Collection
   Chapter 4

  Naming

    Part I
Naming Entities
                    Names

•   Entity: anything in a DS (e.g. resource)
•   Name: a string that refers to an entity
•   Access Point: entity that allows access to
    another entity
•   Address: name of an access point (e.g. IP +
    port number)
•   An entity can have more than one access point
•   An entity can change its access point (e.g.
    mobile entities)
•   An access point can be re-used with another
    entity
        Location-independent Names

•   Separating the entity name from its
    address allows the entity to move
    (change access point) and remain
    accessible
    –   When a web site moves to a different
        machine, it keeps the same name (URL)
•   Such names are location-independent
          Special Kinds of Names

•   An identifier refers to at most one entity
•   An entity cannot be referred to by more
    than one identifier
•   Identifiers are not re-usable

•   Human-friendly names are desirable
    –   URLs are human friendly
    –   IPs are not
                 Name Spaces (1)
•   Name Space = an organization of names in
    a DS
•   Can be represented as a labeled directed
    graph with two types of nodes:
    –   Leaf node:
        •   Represents an entity E
        •   Stores information on E, so that it can be accessed
    –   Directory node:
        •   Points to other nodes (maintains a directory table)
        •   Each outgoing edge is labeled
            Name Spaces (2)




A general naming graph with a single root node.
               Name Spaces (3)
•    Path (name) :
            N:<label-1,label-2,…,label-n>
    – N is the first node in the path
    – Absolute path: N is the root of the name space
    – Relative: N is not the root
•    Global name: denotes the same entity,
     system-wide
    –   Interpreted w.r.t. the same directory node
•    Local name: Interpreted w.r.t. where it is
     being used
           Example Name Spaces

•   File systems:
    – /home/ug/mike/.login
    – root:<home,ug,mike,.login>


•   UNIX:
    –   Graph (tree unless linking)
    –   Leaf node : file
    –   Directory node : directory
            Name Resolution
•   A name points to the node where
    information about an entity is stored
•   Name resolution = looking up a name
•   Resolving: N:<label-1,label-2,…,label-n>
•   Look at directory table of N
•   Lookup label-1 in directory table of N
•   Look at directory table of label-1
•   … Repeat until you hit a leaf
•   A leaf contains info about physical location
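The lookup loop above can be sketched with a small in-memory naming graph (a hypothetical toy, not a real naming service; the node contents and labels are made-up examples): each directory node holds a table mapping edge labels to child nodes, and a leaf stores the information about the entity.

```java
import java.util.*;

// Toy naming graph: directory nodes hold a label -> child table,
// leaf nodes hold the stored information about the entity.
public class NameResolver {
    static class Node {
        Map<String, Node> table = new HashMap<>(); // empty for leaves
        String contents;                           // non-null for leaves
    }

    // Resolve N:<label-1,...,label-n>: look up each label in the
    // current node's directory table until the path is exhausted.
    static String resolve(Node n, List<String> path) {
        for (String label : path) {
            Node next = n.table.get(label);
            if (next == null) throw new NoSuchElementException(label);
            n = next;
        }
        return n.contents; // leaf: info about the entity's location
    }

    static Node root = new Node();
    static {
        Node home = new Node(), ug = new Node(), mike = new Node();
        Node login = new Node();
        login.contents = "inode #42"; // assumed example value
        root.table.put("home", home);
        home.table.put("ug", ug);
        ug.table.put("mike", mike);
        mike.table.put(".login", login);
    }

    public static void main(String[] args) {
        // root:<home,ug,mike,.login>  ~  /home/ug/mike/.login
        System.out.println(resolve(root, List.of("home", "ug", "mike", ".login")));
    }
}
```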
          Closure Mechanism

•   Knowing how and where to start name
    resolution

•   Name resolution can take place only if we
    know how and where to start

•   Example: in UNIX, we always start at the
    inode of /
                  UNIX iNode




   The general organization of the UNIX file system
implementation on a logical disk of contiguous disk blocks.
            Linking and Mounting (1)

•   Alias: another name given to an entity
•   Implementing aliases:
    –   Give a leaf two absolute paths
    –   Symbolic link: store in a leaf an absolute path
        •    requires two name resolutions
•   Mounting: merging two name spaces
    –   Storing in a directory node X the identifier of
        directory node Y from a different NS
    –   X is called the mount point
    –   Y is called the mounting point
         Linking and Mounting (2)




The concept of a symbolic link explained in a naming graph.
    Mounting Remote Name spaces

•   Need to store at the mount point:
    –   Name of access protocol to remote server
    –   Name of the remote server
    –   Name of the mounting point
•   A URL includes all these names:
     nfs://csa.cpsc.ucalgary.ca//home/ug
    –   nfs: network file system (SUN), protocol
    –   csa.cpsc.ucalgary.ca: symbolic server name
    –   /home/ug: mounting point
         Mounting Remote Name Spaces (2)




Mounting remote name spaces through a specific process protocol.
     Global Name Service (GNS)

•   Combining name spaces can be also done
    by GNS
•   To combine name spaces NSi (1 ≤ i ≤ n)
    with roots Ri
•   Add a new root R with the Ri’s as children
•   Absolute (old) path names become relative
•   Assumes unique root names
•   Scalability?
           GNS Example




Organization of the DEC Global Name Service
    Implementation of Name Spaces

•   Naming Service: allows users/processes to
    add, remove, and look up names
•   A name space is at the heart of a naming
    service
•   Naming service is implemented by name
    servers
    –   DS on a LAN : one name server
    –   DS on a WAN: several name servers
        (distributed name space)
        Name Space Distribution (1)

•   Large scale DS: name spaces are usually
    organized hierarchically
•   Assume tree name space
•   Name space is partitioned into logical
    layers (example 3 layers)
    –   Global layer
    –   Administrational layer
    –   Managerial layer
Name Space Distribution (2)




   An example partitioning of the DNS name space,
   including Internet-accessible files, into three layers.
                  Global Layer

•   Includes higher-level nodes
    –   The root and close-by directory nodes


•   Global layer nodes are stable
    –   Their contents (directory tables) rarely change
          Administrational Layer

•   Includes directory nodes that are managed
    within a single organization
    –   All entities represented by this layer belong to
        the same organization


•   Administrational layer nodes are relatively
    stable
                Managerial Layer

•   Includes leafs and nearby directory nodes
    –   Hosts, shared files, user files


•   Managerial layer nodes are not stable
                        Zones

•   Zone: a collection of nodes that belong to
    one layer
    –   Zones cannot cross layers
•   Each zone is implemented by a separate
    name server

•   Why bother with layers?
    –   Name servers in different layers have different
        requirements (availability & performance)
        Global Layer Name Servers
•   Availability: High availability is crucial
    –   A failed server makes a big part of the name
        space unreachable
    –   Name resolution cannot proceed
    –   Replication (consistency and synchronization)
•   Performance: High performance is crucial
    –   Can resort to caching for higher performance
    –   Contents are stable, which makes caching
        efficient
    –   Replication (consistency and synchronization)
Administrational Layer Name Servers
•   Availability: High availability is crucial to
    clients within the organization
    –   Temporary unavailability for outside clients is
        acceptable
    –   Replication (consistency and synchronization)
•   Performance: High performance is crucial
    –   Can resort to caching for higher performance
    –   Contents are relatively stable, which makes
        caching efficient
    –   Replication + high-performance machines for
        faster response times
    Managerial Layer Name Servers

•   Availability: Less demanding
    –   Single dedicated machine for name servers


•   Performance: High performance is crucial
    –   Contents are not stable, which makes caching
        inefficient
               Name Space Distribution

                 Item               Global    Administrational    Managerial

Geographical scale of network     Worldwide   Organization       Department

Total number of nodes             Few         Many               Vast numbers

Responsiveness to lookups         Seconds     Milliseconds       Immediate

Update propagation                Lazy        Immediate          Immediate

Number of replicas                Many        None or few        None

Is client-side caching applied?   Yes         Yes                Sometimes




A comparison between name servers for implementing nodes from a
  large-scale name space partitioned into a global layer, an
  administrational layer, and a managerial layer.
    Implementation of Name Resolution(1)

•    How does resolution proceed?

•    Each client has access to a local name server,
     which ensures name resolution is carried out

•    Names can be resolved:
     –   Iteratively
     –   Recursively
Implementation of Name Resolution(2)

Example:

• root:<nl,vu,cs,ftp,pub,globe,index.txt>
URL: ftp://ftp.cs.vu.nl/pub/globe/index.txt

•   Only <nl,vu,cs,ftp> is sent for resolution
    Iterative Name Resolution




The principle of iterative name resolution.
Recursive Name Resolution (1)




The principle of recursive name resolution.
         Recursive Name Resolution (2)
Server for   Should            Looks up   Passes to     Receives         Returns to
  node       resolve                      child         and caches       requester

cs           <ftp>             #<ftp>     --            --               #<ftp>

vu           <cs,ftp>          #<cs>      <ftp>         #<ftp>           #<cs>
                                                                         #<cs,ftp>

nl           <vu,cs,ftp>       #<vu>      <cs,ftp>      #<cs>            #<vu>
                                                        #<cs,ftp>        #<vu,cs>
                                                                         #<vu,cs,ftp>

root         <nl,vu,cs,ftp>    #<nl>      <vu,cs,ftp>   #<vu>            #<nl>
                                                        #<vu,cs>         #<nl,vu>
                                                        #<vu,cs,ftp>     #<nl,vu,cs>
                                                                         #<nl,vu,cs,ftp>


     Recursive name resolution of <nl, vu, cs, ftp>. Name servers
       cache intermediate results for subsequent lookups.
Iterative versus Recursive Resolution (1)
•   Recursive method puts higher performance
    demand on each name server
    –   Global layer: use iterative or recursive method?


•   Recursive method works better with caching

•   Recursive method can reduce communication
    cost
Iterative versus Recursive Resolution (2)




    The comparison between recursive and iterative name
        resolution with respect to communication costs.
       Domain Name System (DNS)

•   The Internet naming service

•   Probably, the largest distributed naming system
    in-use

•   Primarily used for looking up host addresses
    and mail servers
                 DNS History (1)

•   ARPANET utilized a central file HOSTS.TXT
    –   Contains names to addresses mapping
    –   Maintained by SRI’s NIC


•   Administrators email changes to the NIC
    –   NIC updates HOSTS.TXT periodically
•   Administrators FTP (download) HOSTS.TXT
                   DNS History (2)

•   As the system grew, HOSTS.TXT had
    problems with:
    –   Scalability (traffic and load)
    –   Name collisions
    –   Consistency

•   In 1984, Paul Mockapetris released the first
    version (RFCs 882 and 883, superseded by
    1034 and 1035 …)
               DNS Name Space (1)

•   Rooted tree
•   Label: case-insensitive, alphanumeric string
    –   Max length = 63 chars
    –   Length of a complete path is at most 255 chars
    –   Path name is written in reverse, labels are separated
        by “.”
    –   The label of the incoming edge is the name of the
        node
            DNS Name Space (2)

•   Domain: sub-tree in the name space
•   Domain Name: path name to the root of the
    domain
•   Subdomain: a domain, used in relative terms

•   Contents of a node: a collection of resource
    records
                 DNS Name Space (3)
Type of record   Associated entity   Description

SOA              Zone                Holds information on the represented zone
A                Host                Contains an IP address of the host this node represents
MX               Domain              Refers to a mail server to handle mail addressed to this node
SRV              Domain              Refers to a server handling a specific service
NS               Zone                Refers to a name server that implements the represented zone
CNAME            Node                Symbolic link with the primary name of the represented node
PTR              Host                Contains the canonical name of a host
HINFO            Host                Holds information on the host this node represents
TXT              Any kind            Contains any entity-specific information considered useful



The most important types of resource records forming the
       contents of nodes in the DNS name space.
             Top-Level Domains

•   Generic: com, edu, gov, mil, net, org, int

•   Country specific (two letter indication, ISO
    3166): ca, us, uk, lb, nl, jp …

•   One level down (within country domains): com, edu, co, ac
          Parsing Domain Names

•   www.cpsc.ucalgary.ca
•   ucalgary.ca: The University of Calgary
    domain
•   cpsc: computer science subdomain of
    ucalgary.ca
•   www: a particular host in the domain

•   www.cpsc.ucalgary.ca/~kawash: the path
    /~kawash is not part of DNS
                      Delegation

•   A domain is divided into subdomains
•   The subdomains can be delegated to other
    organizations
    –   Allows scalable distribution of the naming service


•   Domain: ucalgary.ca subdomain of ca
•   Parent domain (ca) delegates the name to the
    organization that maintains the domain
              DNS Implementation

•   DNS is divided into two layers:
    –   Global and administrational
    –   Managerial exists, but not part of DNS
•   Each zone is implemented by a name server
    –   A program that stores domain name space info
•   The name server is said to have authority for
    the zone
           Replicating Name Servers

•   Name servers are always replicated
•   One master replica (primary)
•   Several slave replicas (secondary)
•   Both can have authority for the zone

•   This mechanism provides:
    –   Availability
    –   Consistency
             Primary Master NS


•   Primary Master NS for a zone: reads the zone
    data from a file on its host

•   Updates to a name server are performed on the
    primary replica (Master NS)
              Secondary Master NS

•   Secondary Master NS for a zone: reads the
    zone data from another NS, which has the
    authority for the zone
    –   Typically, the primary master
    –   Also can be a secondary


•   At start up, a secondary NS contacts the
    primary and transfers the zone
                Choosing A Server

•   Resolving a name, which replica (authoritative
    for the zone) do we talk to?

•   BIND uses a roundtrip time metric
•   Initially it assigns each NS a random (small) RTT
    –   Gives every NS a chance
•   Measure the time it takes each NS to respond
•   Choose the one with the smallest RTT
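The selection policy can be sketched as below; this is a simplified toy (class name, the smoothing factor, and the update rule are my assumptions), not BIND's actual implementation:

```java
import java.util.*;

// Sketch of BIND-style server selection: each candidate name server
// starts with a small random RTT estimate (so every server gets a
// chance to be tried), measured responses refine the estimate, and
// queries go to the server with the smallest current estimate.
public class ServerSelector {
    final Map<String, Double> rtt = new HashMap<>();

    ServerSelector(List<String> servers, Random rnd) {
        for (String s : servers)
            rtt.put(s, rnd.nextDouble());      // small random initial RTT
    }

    // Pick the server with the smallest estimated RTT
    String choose() {
        return Collections.min(rtt.entrySet(),
                Map.Entry.comparingByValue()).getKey();
    }

    // Fold a measured round-trip time into the estimate
    // (exponential smoothing; the 0.7/0.3 weights are arbitrary)
    void record(String server, double measuredMs) {
        rtt.merge(server, measuredMs, (old, m) -> 0.7 * old + 0.3 * m);
    }
}
```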
                  Zone Data Files

•   A DNS database is implemented as a small
    collection of files (zone data files)
    –   Maintained by primary servers


•   Data files contain the resource records
    describing the hosts in the zone
    –   They also mark delegation of subdomains
              DNS Zone File (1)
An excerpt from the DNS database for the zone cs.vu.nl.
                  DNS Zone File (2)

           Name      Record type    Record value

cs.vu.nl             NS             solo.cs.vu.nl

solo.cs.vu.nl        A              130.37.21.1




     Part of the description for the vu.nl domain
          which contains the cs.vu.nl domain.
                      Resolvers

•   Name servers are servers!
•   Resolvers are the clients of NS
•   To access DNS, you go through resolvers
•   They can:
    –   Query NS
    –   Process NS response
    –   Pass information back to the requesting program
•   Note that resolvers are servers for client
    programs!
         DNS Name Resolution (1)


•   BIND supports both recursive and iterative
    resolution

•   A NS may refuse to perform recursive resolution
         DNS Name Resolution (2)
•   Resolver sends recursive query to a local NS A
•   A sends iterative query to NS B
•   B could refer A to C
•   A sends iterative query to NS C
•   C could refer A to D
•   Until name is resolved
•   A gives answer to resolver
      DNS Name Resolution (3)

The resolver sends a recursive query to its local NS A. A resolves
the name iteratively: it queries B, which refers it to C; it queries
C, which refers it onward; and so on until some NS X returns the
answer, which A passes back to the resolver.
                     BIND

•   BIND: Berkeley Internet Name Domain
•   Most popular implementation of DNS
•   Maintained by Internet Software Consortium
    (www.isc.org/bind.html)

•   FTP: ftp.isc.org, download from:
    /isc/bind9/9.1.0/bind-91.0.gz
       Chapter 4

      Naming

         Part II
Locating Mobile Entities
              Name Types

•   Human Friendly
•   Identifiers
•   Addresses

•   Naming System: maintains a mapping
    from human-friendly names to addresses
            Locating Entities


•   Web site: www.cpsc.ucalgary.ca
•   To access it, need to look it up
•   Send one request to the NS of
    cpsc.ucalgary.ca
•   Receive the address of the site
           Mobile Entities (1)


•   What if www.cpsc.ucalgary.ca moves to
    pages.cpsc.ucalgary.ca

•   Change the NS for cpsc.ucalgary.ca
•   Look up is not affected
          Mobile Entities (2)


•   What if www.cpsc.ucalgary.ca moves to
    www.cs.syracuse.edu
•   The name www.cpsc.ucalgary.ca should
    not change
•   Solution 1: Record the new address in
    NS for cpsc.ucalgary.ca
•   Solution 2: Record the new name in NS
    for cpsc.ucalgary.ca
           Recording the Address


•   Look up operation is not affected

•   What if www.cpsc.ucalgary.ca moves
    again to www.java.com ?
    –   Record the new address in NS for
        cs.syracuse.edu
    –   No longer a local update
            Recording the Name


•   Look up operation is affected
    –   If site moves n times, we need n look ups
    –   Not suitable for highly mobile entities


•   Do not need to modify the entry in NS for
    cpsc.ucalgary.ca if the server moves
    again
Naming versus Locating Entities (1)


•   Traditional naming systems (e.g. DNS)
    are not appropriate for mobile entities
•   Direct mapping between human-friendly
    names and addresses
•   Need to separate naming from locating
    entities
•   Introduce identifiers
Naming versus Locating Entities (2)




a)   Direct, single-level mapping between names and addresses.
b)   Two-level mapping using identifiers.
Naming versus Locating Entities (3)

•   DNS:
    –   f: Names → P(Addresses)
    –   f(www.cpsc.ucalgary.ca) = {12.23.12.32, …}


•   Two-Level Mapping:
    –   Naming - f: Names → Identifiers
    –   Location - g: Identifiers → P(Addresses)
    –   f(www.java.com) = 12727
    –   g(12727) = {35.98.6.0, …}
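The two-level mapping can be sketched with two lookup tables (a toy illustration; the names, identifier, and addresses below are the slide's made-up examples): when the entity moves, only the location mapping g changes, while f and the name stay put.

```java
import java.util.*;

// Sketch of the two-level mapping: a naming service maps
// human-friendly names to identifiers (f), and a location service
// maps identifiers to sets of current addresses (g).
public class TwoLevelMapping {
    static Map<String, Long> f = new HashMap<>();       // name -> identifier
    static Map<Long, Set<String>> g = new HashMap<>();  // identifier -> addresses

    static Set<String> lookup(String name) {
        return g.getOrDefault(f.get(name), Set.of());
    }

    // When the entity moves, only g changes; the name and f are untouched
    static void move(String name, String newAddress) {
        g.put(f.get(name), new HashSet<>(Set.of(newAddress)));
    }

    public static void main(String[] args) {
        f.put("www.java.com", 12727L);
        g.put(12727L, new HashSet<>(Set.of("35.98.6.0")));
        move("www.java.com", "12.23.12.32"); // entity moved
        System.out.println(lookup("www.java.com"));
    }
}
```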
Implementation of Location Service


•   Simple Solutions
    –   Broadcasting and Multicasting
    –   Forwarding pointers
•   Home-based Solutions
•   Hierarchical Solutions
              Broadcasting


A host needing the address of entity A broadcasts a request to all
hosts. Only the host offering an access point to A replies with A's
address; every other host ignores the request.
                  Multicasting
•   Broadcasting is not suitable for larger
    networks
    –   Bandwidth is wasted
    –   Hosts are interrupted for no reason
•   For larger networks, resort to multicasting
    –   Hosts join multicast groups
    –   A multicast group has a multicast address
    –   When a message is sent to a group, the network
        layer delivers it to all group members
          Forwarding Pointers
•   When an entity moves from A to B, it leaves
    a pointer to B at A
•   Simple: use traditional naming service to
    locate A, then follow the pointer chain
•   Expensive if the chain gets too long
•   All intermediate locations must maintain
    their part of the chain as long as needed
•   Lost pointer?
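The chain-following lookup can be sketched as below (a toy with made-up location names; real systems use (proxy, skeleton) pairs as the figure on the next slide shows). Note how the client can short-cut the chain after a successful lookup:

```java
import java.util.*;

// Sketch of forwarding pointers: when an entity moves, the old
// location keeps a pointer to the new one. Lookup follows the chain;
// short-cutting stores the final location directly for next time.
public class ForwardingChain {
    // location -> next hop (no entry / null means the entity is here)
    static Map<String, String> forward = new HashMap<>();

    static String locate(String start) {
        String loc = start;
        while (forward.get(loc) != null)  // follow the pointer chain
            loc = forward.get(loc);
        return loc;
    }

    public static void main(String[] args) {
        forward.put("A", "B");            // entity moved A -> B
        forward.put("B", "C");            // then B -> C; entity is at C
        String current = locate("A");     // follows A -> B -> C
        forward.put("A", current);        // short-cut the chain
        System.out.println(current);
    }
}
```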
   Forwarding Pointers – Example




The principle of forwarding pointers using (proxy, skeleton) pairs.
Short-Cutting a Chain (1)




Redirecting a forwarding pointer, by storing a
               shortcut in a proxy.
                 Comparison
•   Broadcasting:
    –   Scalability problems
    –   Efficiency problems in large scale systems
•   Multicasting:
    –   Efficiency problems in large scale systems
•   Forwarding Pointers:
    –   Scalability problems
    –   Long chains: performance problems
    –   Prone to failure
     Home-Based Approaches


•   Home-location: popular for supporting
    mobile entities in large-scale networks

•   Keeps track of the current location
•   Often, the place where an entity is
    created
           Fallback Mechanism
•   Fallback Mechanism: with forwarding
    pointers
•   When a chain is broken, the home location
    is consulted for object’s current location
•   The reference at the home location must be
    stored in a fault-tolerant way

•   What if home-location needs to change?
    –   Use traditional naming service to record the
        current location
                 Mobile IP (1)

•   Assign a fixed IP (home location) to a
    mobile host
•   Contact host through home location
•   Fixed home location
    –   Must always exist
    –   An entity moves permanently?
•   Higher communication latency
    –   To solve, use caching
    –   Two-level hierarchy
  Mobile IP (2)




The principle of Mobile IP.
   Hierarchical Approaches (1)




Hierarchical organization of a location service into
 domains, each having an associated directory node.
       Hierarchical Approaches (2)

•   One top level domain
•   Several sub-domains
•   Lowest-level domain = leaf, stores address
•   A domain D has directory dir(D)
•   dir(D) keeps track of all entities in D
•   Each entity is represented by a location
    record in dir(D)
•   Root has records for all entities
Hierarchical Approaches – Replicating Entities




       An example of storing information of an entity
        having two addresses in different leaf domains.
Hierarchical Approaches – Lookup Operation




Looking up a location in a hierarchically organized location service.
Hierarchical Approaches – Update Operation




 a)   An insert request is forwarded to the first node that
      knows about entity E.
 b)   A chain of forwarding pointers to the leaf node is
      created.
Hierarchical Approaches – Delete Operation


•   Delete Replica R of Entity E from domain D
•   Delete pointer from dir(D) to R
•   If location record of E at dir(D) is empty,
    delete record
•   Apply recursively, going up the tree
              Pointer Caches (1)




Caching a reference to a directory node of the lowest-level
  domain in which an entity will reside most of the time.
                Pointer Caches (2)




A cache entry that needs to be invalidated because it returns a
     nonlocal address, while such an address is available.
                  Scalability Issues




 The scalability issues related to uniformly placing subnodes of a
partitioned root node across the network covered by a location service.
     Chapter 4

   Naming

     Part III
Garbage Collection
           Garbage Collection

•   When an entity can no longer be accessed,
    it should be removed

•   GC can be done explicitly or implicitly

•   GC is used in centralized and distributed
    systems, but it is more involved in a DS
          Referencing Problem

•   When no more references exist to an
    object, it should be removed

•   That an object is referenced does NOT
    mean the object will be used

•   There could be two or more objects
    referencing each other but none of them
    is reachable
        GC in Centralized Systems

•    Construct a directed graph G = (V,E):
    – V : objects
    – E : references between objects
    An edge (a,b) indicates a references b
•    V has a subset called root set:
    –   These objects do not need to be referenced
        (e.g. users, system-wide services)
•    If there is no path from the root set (any
     node in the set) to an object O, O must be
     removed
The Problem of Unreferenced Objects




 An example of a graph representing objects containing
                references to each other.
        GC in Distributed Systems

•   GC in centralized systems can be adapted
    to solutions in DS

•   Difficulties:
    –   Solution affects the scalability and
        performance of the DS. Solution itself must
        be scalable and efficient
    –   Machines and processes can fail
          Approaches to DGC


1. Reference Counting

2. Reference Listing

3. Identifying unreachable entities
        Basic Reference Counting

•   Maintain a counter of references with each
    object
•   Increment counter with each created ref
•   Decrement counter with each removed ref
•   Remove object when counter = 0

•   Need to store counter somewhere
    –   Assume in the skeleton
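The counter logic above can be sketched as a few lines (a minimal toy of the skeleton-side bookkeeping; it deliberately ignores the message loss and race problems discussed on the next slides):

```java
// Sketch of basic reference counting kept in the skeleton: the
// counter goes up when a reference is created, down when one is
// removed, and the object is reclaimed when the counter hits zero.
public class RefCountSkeleton {
    private int counter = 0;
    private boolean collected = false;

    void addRef()    { counter++; }            // a reference was created
    void removeRef() {                         // a reference was removed
        if (--counter == 0) collected = true;  // counter = 0: reclaim
    }
    boolean isCollected() { return collected; }
}
```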
Problems with Basic Reference Counting

 •   Problem 1: communication might not be
     reliable
     –   Detect duplicate messages


 •   Problem 2: accessing counter can result in
     a race condition
     –   Improved reference counting
    Unreliable Communication




The problem of maintaining a proper reference count in the
          presence of unreliable communication.
      Race Conditions




a) Copying a reference to another process
   and incrementing the counter too late
b) A solution.
    Weighted Reference Counting (1)

•    Solve race condition with counter access
•    Use only decrement!
•    Associate with each object two values,
     kept in skeleton:
    –   Fixed total weight, TW
    –   Partial weight, PW
    –   Initially TW = PW
    –   The initial value is chosen arbitrarily (is it?)
    Weighted Reference Counting (2)

•    When creating an object reference:
    –   Split PW between skeleton and proxy


•    When copying an object reference:
    –   Split PW between both proxies


•    When removing an object reference x:
    –   Send a message to skeleton to subtract x’s PW
        from TW
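The three operations above can be sketched as follows. This is a minimal toy under my own assumptions (the class shape and the initial weight of 64, an arbitrary power of two so weights split evenly), not the actual protocol messages. The key property: only decrements of TW are ever sent to the skeleton.

```java
// Sketch of weighted reference counting: the skeleton holds a fixed
// total weight TW and a partial weight PW (initially equal).
// Creating or copying a reference only splits a partial weight;
// removing a reference subtracts its partial weight from TW.
// When TW falls back to the skeleton's PW, no references remain.
public class WeightedRefCount {
    int tw, pw;  // skeleton's total and partial weight

    WeightedRefCount(int initial) { tw = initial; pw = initial; }

    // Creating a reference: split the skeleton's PW with the new proxy
    int createRef() {
        pw /= 2;
        return pw;                 // the new proxy's partial weight
    }

    // Copying a reference: split the proxy's PW between both proxies
    static int[] copyRef(int proxyPw) {
        return new int[] { proxyPw / 2, proxyPw / 2 };
    }

    // Removing a reference x: subtract x's PW from TW (decrement only)
    void removeRef(int proxyPw) { tw -= proxyPw; }

    boolean collectible() { return tw == pw; }
}
```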
Weighted Reference Counting (3)




  a)   The initial assignment of weights in weighted
       reference counting
  b)   Weight assignment when creating a new
       reference.
 Weighted Reference Counting (4)




c) Weight assignment when copying a reference.
Weighted Reference Counting Problems

 •   Also assumes reliable communication

 •   The number of references that can be
     created is limited by the value chosen for
     TW
     –   When PW = 1, cannot create more references


 •   Solve by indirection
    Indirection with Reference Counting

•    Similar to forwarding pointers

•    When a new reference cannot be created (PW = 1):
    –   Create a new skeleton with new TW and PW


•    Problem: long chains can be burden on
     performance and subject to failure
                  Indirection




Creating an indirection when the partial weight of a
               reference has reached 1.
    Generation Reference Counting (1)
•    Each proxy stores a pair (g,c):
     –   g = a generation number
     –   c = number of times this proxy has copied the object
•    Take an object O at process Q
•    P1 makes a ref to O: (g,c)P1 = (0,0)
•    P2 makes a ref to O: (g,c)P2 = (0,0)
•    P2 copies O’s ref to P3: (g,c)P2 = (0,1) and (g,c)P3
     = (1,0)
•    P3 copies O’s ref to P4: (g,c)P3 = (1,1) and (g,c)P4
     = (2,0)
Generation Reference Counting (2)




   Creating and copying a remote reference in
          generation reference counting.
    Generation Reference Counting (3)
•    The skeleton maintains a table G
     –   G[i] = number of outstanding copies for
         generation i
•    When a proxy (at P1) is removed:
     –   A message containing (g,c)P1 is sent to the
         skeleton
     –   The skeleton performs: G[g] = G[g] – 1
     –   And: G[g+1] = G[g+1] + c
•    When G[i] = 0 for each i, object is
     removed
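The bookkeeping above can be sketched as below (a toy with a fixed-size generation table and proxies as (g, c) pairs, both my own simplifications). Note that copying requires no message to the skeleton; only create and delete touch G:

```java
// Sketch of generation reference counting: each proxy carries (g, c),
// its generation and how many copies it has made. The skeleton keeps
// G[i], the outstanding proxies per generation. Deleting a proxy
// (g, c) does G[g]-- and G[g+1] += c; the object is removable when
// every G[i] is zero.
public class GenerationRefCount {
    static int[] G = new int[16];  // outstanding proxies per generation

    static int[] create() {        // new first-generation proxy (0, 0)
        G[0]++;
        return new int[] {0, 0};
    }

    static int[] copy(int[] proxy) {        // no skeleton message needed
        proxy[1]++;                          // copier's c goes up
        return new int[] {proxy[0] + 1, 0};  // copy starts at generation g+1
    }

    static void delete(int[] proxy) {       // proxy (g, c) is removed
        G[proxy[0]]--;
        G[proxy[0] + 1] += proxy[1];
    }

    static boolean collectible() {
        for (int n : G) if (n != 0) return false;
        return true;
    }
}
```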
    Generation Reference Counting (4)


•    Also assumes reliable communication

•    Can handle multiple references without the
     need to communicate with the skeleton
     every time a reference is copied
            Reference Listing (1)
•   A skeleton keeps track of all proxies
    referring to it
•   A skeleton maintains a reference list, not
    only a counter
•   Operations on the list are idempotent
    –   An operation can be repeated many times
        with the same effect as performing it once
    –   Adding and removing items are idempotent
            Reference Listing (2)
•   When creating a new reference:
    –   Keep on sending add messages until an ack is
        received


•   When removing a reference:
    –   Keep on sending remove messages until an
        ack is received


•   Are increment and decrement idempotent?
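The contrast can be sketched in a few lines (a toy; the message names are mine): adding or removing an item on a set is idempotent, so a retransmitted add/remove message is harmless, whereas a retransmitted increment inflates a counter.

```java
import java.util.*;

// Sketch of why reference listing tolerates retransmission:
// set add/remove is idempotent, counter increment is not.
public class ReferenceList {
    static Set<String> clients = new HashSet<>();
    static int counter = 0;

    static void addMessage(String client)    { clients.add(client); }    // idempotent
    static void removeMessage(String client) { clients.remove(client); } // idempotent
    static void incrementMessage()           { counter++; }              // NOT idempotent
}
```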
    Java RMI Reference Listing (1)
•   To create an object reference:
    –   P creates a remote reference and sends a
        message to the skeleton
    –   P waits for ack
    –   When the ack is received, P creates the proxy
•   To pass a ref from P1 to P2:
    –   P1 sends a copy of its proxy of O to P2
    –   P2 asks O’s skel to be added to the ref list
    –   When P2 receives an ack, P2 installs the
        proxy
    Problems with Reference Listing

•   Race conditions can occur:
    –   P1 removes its proxy before P2 installs it!
    –   Server might mistakenly GC the object
•   May not scale well:
    –   List may get long
    –   To limit the list size, use leasing
•   Requires Acknowledgement mechanism
    Advantages of Reference Listing

•   Does not assume reliable communication
    –   Skeleton regularly pings clients to see if
        they’re alive
    –   If a client is pinged several times without
        responding, consider it crashed


•   No need to detect duplicate messages
                Ping in Java
import java.io.*;
import java.net.*;
public class ping {
   public static void main (String a[]) throws IOException {
     /* check for argument here */
     String host = a[0];
     /* port 13 is the daytime port */
     Socket s = new Socket(host, 13);
     BufferedReader b = new BufferedReader(new
        InputStreamReader(s.getInputStream()));
     String timeAtHost = b.readLine();
     System.out.println(host + " is up at " + timeAtHost);
     s.close();
   }
}
    Identifying Unreachable Entities

•   The techniques we discussed so far do not
    identify unreachable entities

•   To identify those, we need to trace all the
    entities in a DS

•   Trace-based GC has inherent scalability
    problems
    Tracing in a Centralized System
•   Simplest approach works in two phases:
    –   Mark-and-sweep
    –   Maintain information in a central table

•   Mark phase: trace entities from root set
    and mark reachable entities

•   Sweep phase: exhaustively examine
    memory to locate and reclaim unmarked
    entities
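The two phases can be sketched over a reference graph (a toy with string-named objects; names are made-up). It also shows why tracing catches what reference counting cannot: c and d reference each other but are unreachable, and both get reclaimed.

```java
import java.util.*;

// Sketch of centralized mark-and-sweep: the mark phase walks the
// reference graph from the root set and marks everything reachable;
// the sweep phase reclaims every unmarked object.
public class MarkAndSweep {
    // refs: object -> objects it references (edge (a,b): a references b)
    static Set<String> sweep(Map<String, List<String>> refs,
                             Set<String> rootSet) {
        Set<String> marked = new HashSet<>();
        Deque<String> work = new ArrayDeque<>(rootSet);
        while (!work.isEmpty()) {                   // mark phase
            String o = work.pop();
            if (marked.add(o))
                work.addAll(refs.getOrDefault(o, List.of()));
        }
        Set<String> reclaimed = new HashSet<>(refs.keySet());
        reclaimed.removeAll(marked);                // sweep phase
        return reclaimed;
    }

    public static void main(String[] args) {
        Map<String, List<String>> refs = Map.of(
            "root", List.of("a"),
            "a", List.of("b"),
            "b", List.of(),
            "c", List.of("d"),   // c and d reference each other,
            "d", List.of("c"));  // yet neither is reachable
        System.out.println(sweep(refs, Set.of("root")));
    }
}
```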
         Mark-and-Sweep in a DS
•   Mark Phase: Concurrently, run mark
    locally in each process
    –   Local GCs color objects, skeletons, and
        proxies
•   Colors are: White: Unreachable, Black:
    Reachable, Gray: Temporary color
•   Sweep Phase: Concurrently, GC white
    objects

•   Initially, mark everything white
        Mark Phase in a DS (1)

1. Mark Objects: if an object O is reachable
   from the root set or if its skeleton is
   marked gray, mark O gray
2. Mark Proxies: all proxies in a gray object
   are marked gray
3. Mark Skeletons: When a proxy is marked
   gray, a message is sent to the pertaining
   skeleton to mark itself gray
        Mark Phase in a DS (2)

4. Mark Objects: When an object O and all
   its proxies are marked gray, mark O black
5. Mark Skeletons: Mark O’s (O is black)
   skeleton black
6. Mark Proxies: Send messages to O’s
   proxies (pointing to O) to be marked black
7. Repeat until all objects, skeletons, and
   proxies have been marked white or black
        Drawback of Mark-and-Sweep

•   The graph cannot change while GCing
•   Need to stop-the-world for it to work
•   Can be done by two-mode execution:
    –    Execution mode : GC cannot be done
    –    GC mode : normal process execution cannot
         be done
•   Stop-the-world is undesirable
             Tracing in Groups
•   To improve scalability
•   Divide processes into groups
•   Organize groups in a hierarchy
•   GC takes place within each group through
    a combination of:
    –   Mark-and-sweep and
    –   Reference counting
•   Step 1: GC inside each group
•   Step 2: GC with macro-groups
Tracing Inside a Group – Assumptions

•   An object reference is (proxy, skeleton)
•   For each object there is one skeleton only
•   Each object can have several proxies
•   A skeleton maintains a counter of proxies
•   A process can have at most one proxy per
    remote object
                   Colors

•   Only skeletons and proxies are marked
•   Skeletons are marked with two colors:
    –   Soft
    –   Hard
•   Proxies are marked with three colors:
    –   Soft
    –   Hard
    –   None
                Skeleton Coloring

•   Hard Skeleton:
    –   Reachable from a root object inside the group, or
    –   Reachable from a proxy in a process outside the
        group
•   Soft Skeleton:
    –   Reachable only from proxies inside the group


•   Color can only change from soft to hard
                  Proxy Coloring

•   Hard Proxy:
    –   Reachable from a root object
•   Soft Proxy:
    –   Reachable from a soft skeleton


•   Color can only become harder: none → soft → hard
               Intragroup GC

1. Color skeletons
2. Propagate skeleton colors to proxies within
   each process (intraprocess)
3. Propagate proxy colors to skeletons in other
   processes (interprocess)
4. Repeat steps 2 and 3 until a stable state is
   reached
5. GC
              1. Color Skeletons

•   For each skeleton sk:
•   igp = count sk’s associated proxies inside the
    group
•   ogp = (sk’s counter of proxies) – igp
•   If ogp == 0 then color sk soft
•   Else color sk hard

Now, all skeletons are colored either soft or hard
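This coloring rule can be sketched directly — `counter` and `igp` below are the slide's quantities; the method name is an assumption for the example:

```java
public class SkeletonColoring {
    enum Color { SOFT, HARD }

    // Step 1 as described above: 'counter' is the skeleton's total proxy
    // count, 'igp' the number of its proxies inside the group.
    static Color colorSkeleton(int counter, int igp) {
        int ogp = counter - igp;              // proxies outside the group
        return (ogp == 0) ? Color.SOFT : Color.HARD;
    }

    public static void main(String[] args) {
        // All 3 proxies inside the group: no outside reference, so soft.
        System.out.println(colorSkeleton(3, 3));
        // One proxy outside the group: reachable from outside, so hard.
        System.out.println(colorSkeleton(3, 2));
    }
}
```

Root reachability, the other way a skeleton becomes hard, is handled during the propagation steps that follow.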
1. Color Skeletons – Example

   (Figure: Initial marking of skeletons.)
        2. Intraprocess Color Propagation
             (Skeletons to Proxies) (1)
•   For each process P
•   Run P’s Local GC
•   Regardless of how P’s LGC works, it must:
•   Propagate colors of skeletons to proxies
    within P
    –    Color proxies with the color of their associated
         skeleton
•   If a proxy is reachable from a root object,
    mark it hard
        2. Intraprocess Color Propagation
             (Skeletons to Proxies) (2)
Precisely,
• Color all proxies none
• Propagate hard colors:
    –    Trace from hard skeletons and root objects
    –    Propagate hard color to all reachable proxies
•   Propagate soft colors:
    –    Trace from soft skeletons
    –    Propagate soft color to all reachable none proxies
Now, all proxies in P are none, soft, or hard
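The three sub-steps above can be sketched as follows — a deliberately simplified model in which reachability inside the process has already been computed into two sets; the `Proxy` class and the method signature are assumptions for the example:

```java
import java.util.Collection;
import java.util.List;

public class LocalPropagation {
    enum Color { NONE, SOFT, HARD }

    static class Proxy { Color color = Color.NONE; }

    // Step 2 as listed above: reset everything to none, propagate hard
    // colors first, then soft colors to the proxies still marked none.
    static void propagate(Collection<Proxy> all,
                          Collection<Proxy> fromHardOrRoot,  // reachable from hard skeletons / roots
                          Collection<Proxy> fromSoft) {      // reachable from soft skeletons
        for (Proxy p : all) p.color = Color.NONE;            // reset
        for (Proxy p : fromHardOrRoot) p.color = Color.HARD; // hard first
        for (Proxy p : fromSoft)
            if (p.color == Color.NONE) p.color = Color.SOFT; // hard wins
    }

    public static void main(String[] args) {
        Proxy p1 = new Proxy(), p2 = new Proxy(), p3 = new Proxy();
        // p1 is reachable from both kinds of source, p2 only from a soft
        // skeleton, p3 from neither.
        propagate(List.of(p1, p2, p3), List.of(p1), List.of(p1, p2));
        System.out.println(p1.color + " " + p2.color + " " + p3.color);
    }
}
```

Ordering matters: because hard colors are propagated first, a proxy reachable from both a hard and a soft source ends up hard.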
2. Intraprocess Color Propagation – Example

   (Figure: After local propagation in each process.)
    3. Interprocess Color Propagation
           (Proxies to Skeletons)
•    For each proxy px
•    If px is colored hard
•    Send a message to px's associated skeleton inside
     the group to mark itself hard
(if px is colored soft or none, no propagation is needed)

Now, some skeletons will turn from soft to hard, so
we have to make further local changes
                4. Stabilization


•   Repeat steps 2 and 3 until colors cannot be
    propagated (neither locally nor globally)

When steps 2 and 3 no longer change any colors,
stabilization is reached
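The fixpoint iteration can be sketched with a deliberately simplified model — nodes stand for skeletons and proxies alike, colors are ordered NONE < SOFT < HARD, and each propagation edge raises the target's color to the source's; this integer/array encoding is an assumption for the example:

```java
import java.util.Arrays;

public class Stabilization {
    // Colors are ordered; propagation can only raise a node's color,
    // so iterating to a fixed point must terminate.
    static final int NONE = 0, SOFT = 1, HARD = 2;

    // edges[i] lists the nodes that node i propagates its color to
    // (skeleton -> local proxies, hard proxy -> remote skeleton).
    static int[] stabilize(int[] color, int[][] edges) {
        boolean changed = true;
        while (changed) {                        // steps 2 and 3, repeated
            changed = false;
            for (int i = 0; i < edges.length; i++)
                for (int j : edges[i])
                    if (color[j] < color[i]) {   // raise the target's color
                        color[j] = color[i];
                        changed = true;
                    }
        }
        return color;
    }

    public static void main(String[] args) {
        // Chain: hard skeleton 0 -> proxy 1 -> remote skeleton 2 -> proxy 3
        int[] color = { HARD, NONE, SOFT, NONE };
        int[][] edges = { {1}, {2}, {3}, {} };
        System.out.println(Arrays.toString(stabilize(color, edges)));
    }
}
```

Monotonicity is what guarantees termination: with n nodes and three colors, at most 2n color changes can ever occur.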
Stabilization – Example

   (Figure: Final marking.)
               5. Reclamation


•   GC soft proxies and skeletons
•   GC objects associated with soft skeletons
•   For none proxies, send decrement messages to the
    associated skeletons, then GC the proxies
                Intergroup GC

•   Form larger groups (macro-groups) by
    combining groups
•   Perform Intragroup GC within each macro-group

•   Groups reduce the number of objects to be
    traced
•   This could scale better than simple tracing

				