					E81 CSE 532S: Advanced Multi-Paradigm Software Development




      Synchronization Patterns

     Christopher Gill, Todd Sproull, Eric DeMello
      Department of Computer Science and Engineering
             Washington University, St. Louis
                   cdgill@cse.wustl.edu
          An Illustrative Haiku


        Threads considered bad.
         So non-deterministic.
        What will happen next?

- Justin Wilson, Magdalena Cassel, Adam Drescher, Chris Gill
                     Part I
•   Multi-Threaded Programming
•   Synchronization Patterns
•   Scoped Locking Pattern
•   Thread Safe Interface Pattern
      Multi-Threaded Programming
• Concurrency
  – Logical (single processor): instruction interleaving
  – Physical (multi-processor): parallel execution
• Safety
  – Threads must not corrupt objects or resources
  – More generally, bad inter-leavings must be avoided
     • Atomic: runs to completion without being preempted
     • Granularity at which operations are atomic matters
• Liveness
  – Progress must be made (deadlock is avoided)
  – Goal: full utilization (something is always running)
 Multi-Threaded Programming, Continued
• Benefits
  – Performance
     • Still make progress if one thread blocks (e.g., for I/O)
  – Preemption
     • Higher priority threads preempt lower-priority ones
• Drawbacks
  – Object state corruption due to race conditions
  – Resource contention (overhead, latency costs)
• Need isolation of inter-dependent operations
  – For concurrency, synchronization patterns do this
     • At a cost of reducing concurrency somewhat
     • And at a greater risk of deadlock
 Multi-Threaded Programming, Continued
• Race conditions (threads racing for access)
  – Two or more threads access an object/resource
  – The interleaving of their statements matters
  – Some inter-leavings have bad consequences
• Example (critical sections; sketched in code after this list)
  – Object has two variables x ∈ {A,C}, y ∈ {B,D}
  – Allowed states of the object are AB or CD
  – Assume each write is atomic, but writing both is not
  – Thread t writes x = A; and is then preempted
  – Thread u writes x = C; y = D; and blocks
  – Thread t writes y = B;
  – Object is left in an inconsistent state, CB
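
A minimal sketch of this interleaving in standard C++ (not from the original slides; the Pair_Object name and its set() method are illustrative assumptions):

   #include <iostream>
   #include <thread>

   // Invariant: the object should only ever be in state AB or CD.
   struct Pair_Object {
      char x = 'A';
      char y = 'B';
      void set (char new_x, char new_y) {
         x = new_x;   // a thread can be preempted between these two writes
         y = new_y;
      }
   };

   int main () {
      Pair_Object obj;
      // The slide assumes each individual write is atomic; in strict C++ terms
      // the unsynchronized writes in set() are a data race, which is exactly
      // the hazard a lock around set() would remove.
      std::thread t ([&obj] { obj.set ('A', 'B'); });
      std::thread u ([&obj] { obj.set ('C', 'D'); });
      t.join ();
      u.join ();
      std::cout << obj.x << obj.y << std::endl;   // may print the invalid CB (or AD)
      return 0;
   }

Wrapping both writes in a single critical section (e.g., with the Scoped Locking guard introduced later) restores the AB/CD invariant.
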
 Multi-Threaded Programming, Continued
• Deadlock
  – One or more threads access an object/resource
  – Access to the resource is serialized
  – Chain of accesses leads to mutual blocking
• Single-threaded example (“self-deadlock”)
  – A thread acquires then tries to reacquire same lock
  – If lock is not recursive thread blocks itself
• Two thread example (“deadly embrace”; sketched in code below)
  – Thread t acquires lock j, thread u acquires lock k
  – Thread t tries to acquire lock k, blocks
  – Thread u tries to acquire lock j, blocks
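
A minimal standard-C++ sketch of the deadly embrace (not from the original slides; lock and function names are illustrative):

   #include <mutex>
   #include <thread>

   std::mutex j, k;   // two locks acquired in opposite orders

   void thread_t () {
      std::lock_guard<std::mutex> hold_j (j);   // t acquires lock j
      std::lock_guard<std::mutex> hold_k (k);   // t blocks here if u already holds k
   }

   void thread_u () {
      std::lock_guard<std::mutex> hold_k (k);   // u acquires lock k
      std::lock_guard<std::mutex> hold_j (j);   // u blocks here if t already holds j
   }

   int main () {
      std::thread t (thread_t), u (thread_u);
      t.join ();   // with unlucky timing, neither join() ever returns
      u.join ();
      return 0;
   }

Acquiring the locks in one agreed-upon order in both threads (or acquiring both together with std::scoped_lock, which applies a deadlock-avoidance algorithm) breaks the cycle.
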
         Synchronization Patterns
• Scoped Locking (similar to C++ RAII Idiom)
  – Ensures a lock is acquired/released in a scope
• Thread-Safe Interface
  – Reduce internal locking overhead
  – Avoid self-deadlock
• Strategized Locking
  – Customize locks for safety, liveness, optimization
• Double-Checked Locking Optimization
  – Reduce contention and locking overhead for
    acquire-only-once locks
• Complement concurrency patterns we'll cover
              Scoped Locking Pattern
• Intent
  – Ensures lock is acquired when control enters a scope and is
    released automatically when control leaves, by any path
• Example (from POSA 2)
  class Hit_Counter {
  public:
     bool increment (const string &path) {
        lock_.acquire ();
        Table_Entry *entry = lookup_or_create (path);
        if (entry == 0) {
           lock_.release ();
           return false;
        }
        else {
           entry->increment_hit_count ();
           lock_.release ();
           return true;
        }
     }
  private:
     Thread_Mutex lock_;
     // ...
  };


  What synchronization problems might occur with this example?
       Scoped Locking, Continued
• Problems in the example
  – Revisions of the code might fail to release the lock
    on some return paths (“maintenance scars”)
  – Either of the function calls might throw an
    exception, in which case lock is not released
• In general
  – Code that should not execute concurrently, should
    be protected (made atomic) by a lock
  – However it is hard to ensure that locks are
    released in all paths through the code
     • C++ code can leave a scope due to a return, break,
       continue, or goto statement, or a propagating exception
        Scoped Locking, Continued
• Solution
   – Define a guard class whose constructor automatically acquires a
     lock when control enters a scope
   – Destructor automatically releases the lock when it leaves the scope

class Thread_Mutex_Guard {
public:
    Thread_Mutex_Guard (Thread_Mutex &lock)
        : lock_ (&lock), owner_ (false) {
        lock_->acquire ();
        owner_ = true;
    }
    ~Thread_Mutex_Guard () {
        if (owner_) lock_->release ();
    }
private:
    Thread_Mutex *lock_;
    bool owner_;
    // Disallow copying and assignment.
    Thread_Mutex_Guard (const Thread_Mutex_Guard &);
    void operator= (const Thread_Mutex_Guard &);
};
          Scoped Locking, Continued
• Solution, Continued
   − Let critical sections correspond to the scoped lifetime of a guard object

   class Hit_Counter {
   public:
      bool increment (const string &path) {
         Thread_Mutex_Guard guard (lock_);
         Table_Entry *entry = lookup_or_create (path);
         if (entry == 0) {
            // lock_.release ();   // no longer needed
            return false;
         }
         else {
            entry->increment_hit_count ();
            // lock_.release ();   // no longer needed
            return true;
         }
      }
   private:
      Thread_Mutex lock_;
      // ...
   };


   Why do we not need the lock_.release() calls any more?
       Scoped Locking, Continued
• May want to release lock explicitly without
  leaving method
  – Add public acquire and release methods to guard
  – Must keep track of lock ownership to avoid double release
• Writing a separate guard class for each type
  of lock is tedious and error prone
  – Provide a guard class template parameterized on
    lock type (see the sketch after this list)
  – Provide a hierarchy of lock types with a common
    (abstract) base class
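
A minimal sketch combining both points (not from the original slides, in the spirit of the ACE_Guard template mentioned on the next slide): the LOCK parameter is assumed to provide acquire() and release(), like the Thread_Mutex above, and an ownership flag lets an explicit release coexist safely with the destructor.

   template <class LOCK>
   class Guard {
   public:
      Guard (LOCK &lock) : lock_ (&lock), owner_ (false) {
         acquire ();
      }
      ~Guard () {
         release ();               // safe: release() checks ownership first
      }
      void acquire () {
         lock_->acquire ();
         owner_ = true;
      }
      void release () {
         if (owner_) {             // ownership flag prevents a double release
            owner_ = false;
            lock_->release ();
         }
      }
   private:
      LOCK *lock_;
      bool owner_;
      Guard (const Guard &);       // disallow copying and assignment, as before
      void operator= (const Guard &);
   };
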
       Scoped Locking, Continued
• Known Uses
  – Booch Components
    • First C++ class libraries to use this idiom for multi-
      threaded programs
  – ACE
    • ACE_Guard class template
  – Java
    • Programming language feature called a synchronized
      block
    • Generates an exception handler that ensures that a lock
      is released if an exception occurs in a synchronized
      block
    Scoped Locking Consequences
• Increased robustness
• Potential for deadlock when used recursively
  – Self deadlock could occur if the lock is not recursive
• Limitations with language-specific semantics
  – Based on scope semantics of C++
  – Other languages may not support this idiom readily
     • E.g., C longjmp() function does not call C++ destructors
      Thread-Safe Interface Pattern
• Intent
  – Minimizes locking overhead
  – Ensures intra-component method calls do not 'self-deadlock'
• Context
  – Intra-Component method calls
     • public methods (accessible from outside a class)
     • private implementations which change component state
  – Recursive mutex: higher overhead
  – Non-recursive mutex: risk of deadlock
Thread-Safe Interface, Continued

• Non-Recursive Mutex: Deadlock Example (sketched in code below)
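
A minimal code sketch of this scenario (a hypothetical component, reusing the Thread_Mutex_Guard from the Scoped Locking slides): one public method calls another public method of the same object, and the second attempt to acquire the non-recursive lock blocks the thread on the lock it already holds.

   class File_Cache {
   public:
      const void *lookup (const string &path) {
         Thread_Mutex_Guard guard (lock_);   // acquires lock_
         return check (path);                // intra-component call to a public method...
      }
      const void *check (const string &path) {
         Thread_Mutex_Guard guard (lock_);   // ...tries to acquire lock_ again: self-deadlock
         // ... search the cache ...
         return 0;
      }
   private:
      Thread_Mutex lock_;   // non-recursive mutex
   };
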
Thread-Safe Interface, Continued
• Recursive Mutex: Overhead Example
    Thread-Safe Interface, Continued
• Solution (sketched in code after this list)
  – Separate locking from
    implementation
  – Encapsulate acquire/release
    within public interface methods
     • “at the border”
  – Encapsulate implementation in
    private methods
     • Do not acquire/release
     • Crucial restriction: do not “call up”
       to public interface methods
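
A minimal sketch of the resulting structure (hypothetical component and method names, with implementation methods suffixed _i in the style of the POSA2 examples):

   class File_Cache {
   public:
      const void *lookup (const string &path) {
         Thread_Mutex_Guard guard (lock_);    // lock acquired once, "at the border"
         return lookup_i (path);              // delegate to the implementation method
      }
   private:
      // Implementation methods assume the lock is already held,
      // never acquire it themselves, and never call public methods.
      const void *lookup_i (const string &path) {
         const void *ptr = check_i (path);    // intra-component calls stay on
         return ptr ? ptr : insert_i (path);  // the implementation side
      }
      const void *check_i (const string &path);
      const void *insert_i (const string &path);
      Thread_Mutex lock_;
   };
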
  Thread-Safe Interface, Continued
• Variants
  – Thread-Safe Façade
       • Synchronize an entire subsystem
       • Analogous to calls to OS kernel, that block until completion
  – Thread-Safe Wrapper Façade
       • Nested monitor lockout -- one thread holds locks on two
         objects that require each other
       • Provide wrapper class as synchronization proxy
• Benefits
  –   Helps prevent self-deadlock caused by intra-component calls
  –   Helps avoid unnecessary acquire/release calls
  –   Simplifies software for multi-threaded programming
  –   Allows addition of thread-safe wrappers to legacy code
    Thread-Safe Interface, Continued
• Pitfalls:
   – Extra methods
      • Due to method indirection
      • Can ease cost with inlining
   – Self-Deadlock still possible
      • Inter-Component method calling
      • Calls from internal methods up to public methods
   – Potential overhead
      • Synchronization overhead from multiple locks
      • Lock contention
   – “Honor System”
      • Have to trust private methods perform correctly
   – Legacy code
      • Private implementations may have internal concurrency
   – Even in many O-O languages, callers can bypass private “screens”
       Questions for Discussion
• What are the key differences between
  applications designed for a single thread
  versus multiple threads?
• What types of problems can arise?
• What is meant by “safety”? By “liveness”?
• How do the 2 synchronization patterns we
  covered today help with these problems?
• Which patterns apply in which contexts?
                  Part II
• Strategized Locking Pattern
• Double-Checked Locking Optimization
  Pattern
• Review of Synchronization Patterns
           Strategized Locking Pattern
• Intent
  – Parameterizes synchronization mechanisms that protect a
    component's critical section from concurrent access
• Context
  – Components can be re-used efficiently within a variety of
    different concurrent applications
  – Different applications might need different synchronization
    strategies
     • Mutex
     • Readers/Writer locks
     • Semaphores
• Decouple application logic from the locks
  – Enhancements, bug fixes should be straightforward
  – Can modify locks without changing application logic
  – Can modify application w/o changing lock implementations
       Strategized Locking, Continued
class File_Cache_Single_Threaded {
public:
   const void *lookup (const string &path) const {
      const void *file_pointer = 0;
      // look up file in cache
      return file_pointer;
   }
private:
   // no lock required
};

class File_Cache_Thread_Mutex {
public:
   // not const: acquiring lock_ modifies the object
   const void *lookup (const string &path) {
      Thread_Mutex_Guard guard (lock_);
      const void *file_pointer = 0;
      // look up file in cache
      return file_pointer;
   }
private:
   Thread_Mutex lock_;
};

Goal: avoid multiple copies of similar code just for different locking strategies
     Strategized Locking, Continued
• Solution
  – Parameterize a component's synchronization aspects
  – Define pluggable types
     • E.g., mutex, readers/writers lock, semaphore
• Implementation
  – Define basic component behavior and interfaces
  – Strategize the component's locking mechanism(s)
     • I.e., define lock concept, or abstract interface
  – Update the component interface and implementation to
    protect critical sections (safety)
     • E.g., using the Scoped Locking Idiom
  – Refine component implementation to avoid deadlock
    (liveness) and to optimize performance
  – Complete a family of locking strategies
     • E.g., adding Null implementations of locks for single threaded case
     Strategized Locking, Continued

• Define basic component implementation and interface
  – Without concern for the component's synchronization aspects

  class File_Cache {
  public:
     const void *lookup (const string &path);
  private:
     // ...
  };
     Strategized Locking, Continued
• Strategize the locking mechanism
  – Choose polymorphism or parameterized types for a uniform
    strategy
  – Define an abstract interface for the locking mechanism
  – Define a guard class that is strategized by its
    synchronization aspect
• Update component interface and implementation
  – Use synchronization to protect critical sections

  template <class LOCK>
  class File_Cache {
  public:
     // BTW, why can’t this be a const method?
     const void *lookup (const string &path) {
        Guard<LOCK> guard (lock_);
        // implement the lookup method
     }
  private:
     LOCK lock_;
  };
       Strategized Locking, Continued
• Revise component implementation to avoid deadlock
  –   Watch for intra-component method invocations
  –   Be careful to avoid self-deadlock
  –   Remove unnecessary synchronization overhead
  –   Thread-Safe Interface pattern may be useful
       • Provides techniques to prevent some of these problems
• Family of locking strategies with a common interface
  – E.g.,
       •   Recursive and non-recursive mutexes
       •   Readers/writer locks
       •   Semaphores
       •   File locks
       •   Null locks
     Strategized Locking, Continued
• Pluggable parameterized types
  – Allow for easy configuration of different locking
    strategies
  – Don't require new types to fit into lock inheritance
    hierarchy
     • Must simply model the appropriate lock concept
     • E.g., by all providing the same interface
  – Notice use of typedefs to encapsulate locking
    strategy
     • Could also expose them as traits…
  – Single Threaded
     • typedef File_Cache<Null_Mutex> Content_Cache; (a Null_Mutex sketch follows)
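
A minimal sketch of the null lock assumed by that typedef (the exact interface is an assumption, modeled on the acquire()/release() calls used earlier): every operation is a no-op, so the single-threaded configuration pays no locking cost.

   class Null_Mutex {
   public:
      void acquire () { /* no-op */ }
      void release () { /* no-op */ }
   };
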
     Strategized Locking, Continued

• Pluggable parameterized types, continued
  – Multi-threaded using a thread mutex
    • typedef File_Cache<Thread_Mutex> Content_Cache;
  – Multi-threaded file cache using a readers/writer lock
    • typedef File_Cache<RW_Lock> Content_Cache;
  – Multi-threaded file cache using a C++ compiler
    supporting default template parameters
    • typedef File_Cache<> Content_Cache; (see the sketch below)
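
A minimal sketch of the default-parameter variant (choosing Null_Mutex as the default is an assumption for illustration):

   template <class LOCK = Null_Mutex>   // assumed default; any type modeling the
   class File_Cache {                   // acquire()/release() lock concept works
      // ... interface and implementation as shown earlier ...
   };

   typedef File_Cache<> Content_Cache;  // equivalent to File_Cache<Null_Mutex>
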
  Strategized Locking Known Uses
• ACE
  – Used extensively throughout ACE
  – E.g., ACE_Hash_Map_Manager
     • Synchronization aspects strategized via parameterized types
• Dynix/PTX
  – Operating system applies locking strategies in its kernel
• ATL Wizards
  – Microsoft's ATL Wizard in Visual Studio
  – Uses parameterized type style of Strategized Locking
  Strategized Locking Consequences
• Benefits
   – Enhanced flexibility and customization
   – Decreased maintenance effort for components
   – Improved reuse
• Liabilities
   – Obtrusive Locking
      • Code “fingerprint” is unavoidable, may take up actual space
      • AOP and/or optimized compilers may solve these problems
   – Over-engineering
      • May provide much more flexibility than is actually needed
      • Ok if you're developing a more comprehensive library like ACE
      • Maybe not if you're building a “just enough” library for specific needs
Double-Checked Locking Optimization
• Intent
  – Reduce contention and synchronization overhead whenever
    a critical section of code must acquire a lock just once
• Problem
  – How to avoid race conditions when multiple threads access a shared resource?
• Context
  – An application with some shared resource(s)
  – Resource(s) can be accessed by two or more threads
• Examples
  – Execute a particular block of code only once at run-time
     • E.g., initialization of a subsystem at application start-up
  – Singleton
     • Maintain a single, globally accessible instance of a class
   Double-Checked Locking Optimization, Continued
• Singleton: Draft 1
   – From GoF text

class Singleton {
public:
    static Singleton *instance () {
       if (instance_ == 0) {
          instance_ = new Singleton ();
       }
       return instance_;
    }
private:
    static Singleton *instance_;
};

// Static initialized in source file
Singleton *Singleton::instance_ = 0;

• Design Forces
   – Pre-emptive calls to instance()
   – Multiple threads initialize dynamic memory inside critical section
   – Can lead to memory leaks
   – Even worse, can result in program inconsistency
      • E.g., lost writes
      • E.g., multiple instances

How could this go wrong?
   Double-Checked Locking Optimization, Continued
• Singleton: Draft 2
   – Uses Scoped Locking

class Singleton {
public:
    static Singleton *instance () {
       Guard<Thread_Mutex>
         guard (singleton_lock_);
       if (instance_ == 0) {
          instance_ = new Singleton ();
       }
       return instance_;
    }
private:
    static Singleton *instance_;
};

• Design Forces
   – We get thread-safety
   – But also high overhead
   – Unnecessary calls to acquire/release
      • After initialization

Why is this not the best idea?
   Double-Checked Locking Optimization, Continued
• Singleton: Draft 3
   – Move Guard inside conditional check

class Singleton {
public:
   static Singleton *instance () {
      if (instance_ == 0) {
         Guard<Thread_Mutex>
           guard (singleton_lock_);
         instance_ = new Singleton ();
      }
      return instance_;
   }
private:
   static Singleton *instance_;
};

• Design Forces
   – Race condition hazard
      • Two threads get “true” on check
      • 1st thread doesn't block, initializes Singleton
      • 2nd thread blocks, then re-initializes Singleton

Why is this not correct?
   Double-Checked Locking Optimization, Continued
• Solution: Double-Check
   – Isolate critical code
   – Use locks to serialize access (scoped locking)
   – Double-check: before and after acquiring lock

class Singleton {
public:
   static Singleton *instance () {
      if (instance_ == 0) {
         Guard<Thread_Mutex>
           guard (singleton_lock_);
         if (instance_ == 0)
            instance_ = new Singleton ();
      }
      return instance_;
   }
private:
   static Singleton *instance_;
};
   Double-Checked Locking Optimization,
               Continued
• Variants
   – Template Adapter
      • Support a set of classes with Singleton-like behavior
   – Pre-initialization
      • Initialize all objects at start-up
• Benefits
   – Minimized locking overhead
   – Prevents race conditions
• Liabilities
   – Possibly non-atomic pointer/integral assignment semantics
   – Compiler optimization may cache 1st check, ignore 2nd
      • Similar issues with multi-processor cache coherency
    – Small additional mutex overhead (a C++11 alternative is sketched below)
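
In C++11 and later, one way to address these liabilities is to let the standard library handle the once-only initialization; a minimal sketch with std::call_once, assuming a Singleton class like the one above:

   #include <mutex>

   class Singleton {
   public:
      static Singleton *instance () {
         // std::call_once performs the double-check internally, with the
         // memory-ordering guarantees the hand-written version lacks.
         std::call_once (init_flag_, [] { instance_ = new Singleton; });
         return instance_;
      }
   private:
      Singleton () {}
      static std::once_flag init_flag_;
      static Singleton *instance_;
   };

   // Static members defined in a source file.
   std::once_flag Singleton::init_flag_;
   Singleton *Singleton::instance_ = 0;

A function-local static (static Singleton s; inside instance()) achieves the same effect, since C++11 guarantees thread-safe initialization of such statics.
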
                  Review
• What are the key differences between
  applications designed for a single thread
  versus multiple threads?
• What types of problems can arise?
• What is meant by “safety”? By “liveness”?
• How do the 4 synchronization patterns we've
  covered this week help with these problems?
• Which patterns apply in which contexts?

				