
Chapter 6: Process Synchronization
What is the output?

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

void *runner(void *param); // the child thread

int main(int argc, char *argv[]) {
    pthread_t tid;        // thread ID
    pthread_attr_t attr;  // set of thread attributes

    // get the default attributes
    if (pthread_attr_init(&attr) != 0) {
        fprintf(stderr, "Error: Could not get thread attributes!\n");
        exit(1);
    }

    // create the thread
    pthread_create(&tid, &attr, runner, NULL);

    printf("this is some output from ");
    fflush(stdout);
    printf("the parent thread\n");
    fflush(stdout);
}

void *runner(void *param) {
    printf("dolphins are large ");
    fflush(stdout);
    printf("brained mammals\n");
    fflush(stdout);

    pthread_exit(0);
}

2 separate runs of the same program (output as shown on the slide):

Run 1:
    this is some output from dolphins are large brained mammals
    the parent thread

Run 2:
    this is some output from the parent thread

           Issue: Race Condition
• Definition
   – When different computational results (e.g., output, or the values
     of variables) occur depending on the particular timing and
     resulting order of execution of statements across separate
     threads or processes (see the counter sketch below)
• General solution strategy
   – Need to ensure that only one process or thread is allowed to
     change a variable or I/O device until that process has
     completed its required sequence of operations
   – In general, a thread needs to perform some sequence of
     operations on an I/O device or data structure to leave it in a
     consistent state, before the next thread can access the I/O
     device or data structure
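
To make the definition concrete, here is a minimal sketch (not from the slides; it uses C++ std::thread rather than the pthread API shown earlier) of a race on a shared counter: count++ is really a load, an add, and a store, so increments from the two threads can interleave and be lost.

#include <iostream>
#include <thread>

long count = 0;                   // shared data, accessed with no synchronization

void adder(int n) {
    for (int i = 0; i < n; i++)
        count++;                  // read-modify-write; not atomic
}

int main() {
    std::thread t1(adder, 1000000);
    std::thread t2(adder, 1000000);
    t1.join();
    t2.join();
    // Frequently prints less than 2000000, and the value varies from run to run.
    std::cout << "count = " << count << std::endl;
    return 0;
}
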
                                Therac-25
• Between June 1985 and January 1987, some
  cancer patients being treated with radiation were
  injured & killed due to faulty software
   – Massive overdoses to 6 patients, killing 3
• Software had a race condition associated with
  command screen
• Software was improperly synchronized!!
• See also
    p. 340-341 Quinn (2006), Ethics for the Information Age

    Nancy G. Leveson, Clark S. Turner, "An Investigation of the Therac-25 Accidents,"
     Computer, vol. 26, no. 7, pp. 18-41, July 1993
         http://doi.ieeecomputersociety.org/10.1109/MC.1993.274940


             Chapter 6: Process
              Synchronization
•   Critical Section (CS) problem
•   Partial solutions to CS problem
•   Busy waiting
•   Semaphores
•   Atomicity
    – Hardware methods for atomicity
    – Multiple CPU systems
• Classic problems
    – Bounded buffer, producer-consumer, readers-writers,
      dining philosophers
• Monitors
      Critical Section Problem

              do {

                 entry section

                     critical section

                 exit section

                     remainder section

              } while (TRUE);



Figure 6.1: General structure of a typical process Pi
                Critical Section
• N processes (P1, P2, …, PN) are accessing
  shared data (e.g., RAM, or shared files).
   – Access typically involves reads and writes to data
• The program code where shared data is accessed
  in a process is the critical section of that process
• General Goal: ensure that only one of P1, P2, …, PN
  is in its critical section at any particular time
   – Can loosen this requirement at times if processes are only
     reading the data
• Main requirements for entry/exit methods (see
  p. 220 of text)
   1) Provide mutual exclusion
   2) Allow progress
   3) Have bounded waiting
 Main requirements for entry/exit
  methods (see p. 228 of text)
• mutual exclusion
   – If process Pi is executing in its critical section, then no other
      process can be executing in its critical section
• progress
   – If no process is executing in its critical section and some processes
     wish to enter their critical sections, then only those processes that
     are not executing in their remainder sections can participate in
     deciding which will enter its critical section next, and this selection
     cannot be postponed indefinitely
• bounded waiting
   – There exists a bound, or limit, on the number of times that other
     processes are allowed to enter their critical sections after a process
     has made a request to enter its critical section and before that
      request is granted
     How do we solve the critical
         section problem?
• Look at two partial solutions, then a “full” solution
• Trying to solve this problem “from scratch”
   – Not utilizing system calls in solutions here
   – Just trying to use our usual programming concepts to
     solve this problem
• Assume assignment statements are completed
  atomically
• Consider main requirements:
   – 1) mutual exclusion; 2) progress; 3) bounded waiting
• Need to also look at other possible problems for any
  particular solution
• We’ll introduce more general, typical techniques afterwards
Partial Solution 1 to Critical Section Problem
           // global variable
           // initialized before threads start
           int turn = 0; // number of thread that can enter CS next

// thread 0                     // thread 1
do {                            do {
    while (turn != 0) {}            while (turn != 1) {}
    // c. s.                        // c. s.
    turn = 1;                       turn = 0;
    // remainder code               // remainder code
} while (true);                 } while (true);

 Do we have mutual exclusion? Progress? Bounded waiting?
Partial Solution 2 to Critical Section Problem
              // initialization
              // flag[i] set if thread i wants access to CS
              bool flag[2] = {false, false};

// thread 0                    // thread 1
do {                           do {
    flag[0] = true;                flag[1] = true;
    while (flag[1]) {}             while (flag[0]) {}
    // c. s.                       // c. s.
    flag[0] = false;               flag[1] = false;
    // other code                  // other code
} while (true);                } while (true);
 Do we have mutual exclusion? Progress? Bounded waiting?
A Full Solution to Critical Section Problem (p. 230)

               // initialization
               // flag[i] set if thread i wants access to CS
               bool flag[2] = {false, false};
               // number of thread that can enter CS next
               int turn = 1;

// thread 0                           // thread 1
do {                                  do {
    flag[0] = true;                       flag[1] = true;
    turn = 1;                             turn = 0;
    while (flag[1] && turn == 1) {}       while (flag[0] && turn == 0) {}
    // c. s.                              // c. s.
    flag[0] = false;                      flag[1] = false;
    // other code                         // other code
} while (true);                       } while (true);

    Do we have mutual exclusion? Progress? Bounded waiting?
               In Summary
• We have a solution to the critical section
  problem
• However, there are more general, typical
  solutions




Issues with this critical section solution
  • Complex code
    – It’s hard to figure out if the code is correct
  • Busy waiting
    – Busy wait (aka. polling): In a loop,
      continuously checking the value of variables
    – These processes are doing a busy wait until
      allowed into critical section
    – Busy waiting is not the same as a process being
      blocked. Why?
    – Threads actively take CPU cycles when waiting
      for other threads to exit from critical section
Another Busy Waiting Example

[Slide 15: code figure; not recoverable from this extraction]

Busy Wait Loop

[Slide 16: code figure; not recoverable from this extraction. Slide note:
“terminated” is not used correctly here; this is not thread cancellation!
A sketch of a similar busy-wait loop follows below.]
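
A minimal sketch of a comparable busy-wait loop (an assumption, since the original figure is lost; the names done, waiter, and worker are illustrative): one thread spins, repeatedly re-checking a shared flag that another thread will eventually set, consuming CPU the whole time rather than being blocked.

#include <atomic>
#include <thread>

std::atomic<bool> done(false);     // flag that the worker thread will set

void waiter() {
    // Busy wait: loop continuously re-checking the flag.
    // The thread stays runnable and burns CPU cycles the whole time,
    // in contrast to a blocked thread, which uses no CPU.
    while (!done.load()) {}
    // ... continue once the worker has finished ...
}

void worker() {
    // ... do some work ...
    done.store(true);              // lets the waiter fall out of its loop
}

int main() {
    std::thread t1(waiter), t2(worker);
    t2.join();
    t1.join();
    return 0;
}

Note that the waiting thread leaves its loop on its own; nothing here cancels it, which is the point of the slide's annotation.
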
  Complex Code: Think About Critical
 Section (CS) Problem More Abstractly
• We want two operations:

enter()
   – perform operations needed so only current
     thread can access CS
exit()
   – Perform operations to enable other threads to
     access CS

              do {
                enter();
                // CS
                exit();
                // other code
              } while (true);

Further Issue: What if we have multiple processes trying to
enter? Or what if we have multiple critical sections?
                Introduce a Parameter

enter(s)
   – perform operations needed so only current
     thread can access CS
exit(s)
   – Perform operations to enable other threads to
     access CS

              do {
                enter(s);
                // CS
                exit(s);
                // other code
              } while (true);

          (traditionally the parameter is called a “semaphore”)
     Semaphores: With Busy Waiting
    • Traditionally
       enter() is called “P” (or “wait”)
       exit() is called “V” (or “signal”)
    • Data type of the parameter is called semaphore

    semaphore (int value);
       – Semaphore constructor
       – Integer value
           • number of processes that can enter critical section without
             waiting; often initialized to 1.


void P(semaphore *S);                 void V(semaphore *S);
  while (S->value <= 0) {};               Increment S->value;
  Decrement S->value;
   Often not a practical implementation: Uses a busy wait
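
A sketch of this busy-waiting version in C++ (an assumption; the slides give only pseudocode): std::atomic's compare_exchange makes the "test and decrement" a single atomic step, which the pseudocode above glosses over and the next slide discusses.

#include <atomic>

struct SpinSemaphore {
    std::atomic<int> value;                     // number of threads that may still enter
    explicit SpinSemaphore(int initial) : value(initial) {}
};

void P(SpinSemaphore *S) {                      // enter / "wait"
    while (true) {
        int v = S->value.load();
        // Busy wait while no permits are available; otherwise try to
        // test and decrement atomically, in one step.
        if (v > 0 && S->value.compare_exchange_weak(v, v - 1))
            return;
    }
}

void V(SpinSemaphore *S) {                      // exit / "signal"
    S->value.fetch_add(1);                      // atomic increment
}
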
         Need ‘Atomic’ Operations
• Need some level of indivisible or “atomic”
  sequences of operations
   – E.g., when an operation runs, it completes without
     another process executing the same code, or the same
     critical section
• In P, we must test & decrement value of
  semaphore in one step
   – e.g., with a semaphore value of 1, we don’t want two
     processes to test the value of S, find that it is > 0, and both
     decrement the value of S
• Plus, increment operation must be atomic
Critical Section Solution with (Busy
          Wait) Semaphores
      semaphore mutex(1); // initialization

// thread 1                     // thread 2
do {                            do {
   P(mutex);                       P(mutex);
   // c. s.                        // c. s.
   V(mutex);                       V(mutex);
   // other code                   // other code
} while (true);                 } while (true);

Note: does not have bounded waiting; there is a race condition
over which waiting thread acquires the semaphore next.
 Do we have mutual exclusion? Progress? Bounded waiting?
                 Second Issue
• Busy waiting
• Threads actively take CPU cycles when using the
  wait operation to gain access to the critical section
• This type of semaphore is called a spin lock
  because the process spins (continually uses CPU)
  while waiting for the semaphore (the “lock”)
• How can we solve this? That is, how can we not
  use a busy wait to implement semaphores?

 Semaphores: Typical Implementation
   • Add a queue of waiting processes to the data
     structure for each semaphore
   semaphore (int value);
       – Semaphore constructor
       – Integer value
           • number of processes that can enter critical section without
             waiting; often initialized to 1.
       – Data structure includes a queue of waiting processes


void P(semaphore *S);                  void V(semaphore *S);
  Decrement S->value                       Increment S->value
  If S->value < 0, then                    If S->value <= 0, then
      blocks calling process on S->queue        Wake up a process blocked on S->queue


  Semaphore operations (P & V) must be executed atomically
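
A sketch of the typical implementation above in C++ (an assumption: it uses std::mutex and std::condition_variable, which are not part of the slides, to make P and V atomic and to block rather than spin). A negative value records how many threads are blocked, matching the pseudocode; the wakeups counter guards against spurious wakeups.

#include <condition_variable>
#include <mutex>

class Semaphore {
public:
    explicit Semaphore(int initial) : value(initial), wakeups(0) {}

    void P() {                                  // "wait"
        std::unique_lock<std::mutex> lock(m);
        --value;
        if (value < 0) {                        // no permit: block this thread
            while (wakeups == 0)
                cv.wait(lock);                  // releases the lock while blocked
            --wakeups;
        }
    }

    void V() {                                  // "signal"
        std::unique_lock<std::mutex> lock(m);
        ++value;
        if (value <= 0) {                       // at least one thread is blocked
            ++wakeups;
            cv.notify_one();                    // wake one blocked thread
        }
    }

private:
    std::mutex m;                               // makes P and V atomic w.r.t. each other
    std::condition_variable cv;                 // the queue of waiting threads
    int value;                                  // when negative, |value| threads are blocked
    int wakeups;                                // pending wakeups handed out by V
};
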
                    Example
• Construct a queue (FIFO) data structure that
  can be used by two threads to access the
  queue data in a synchronized manner
• Code this in C++ with Semaphores as your
  synchronization mechanism
  – i.e., assume you have a Semaphore class, with P
    and V operations
• Use STL queue as your data structure
  – It has methods: front(), back(), push(), pop(),
    size()
[Slides 26-28: the original C++ solution code is not recoverable from this extraction; a sketch follows below]
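
One possible solution sketch (an assumption, since the slides' own code did not survive extraction). It wraps an STL queue and uses a single Semaphore, initialized to 1, as a mutual exclusion lock; the Semaphore class with P and V is assumed to be available, e.g. the one sketched above.

#include <queue>

class SyncQueue {
public:
    SyncQueue() : mutex(1) {}             // 1 => only one thread inside a method at a time

    void push(int item) {
        mutex.P();
        q.push(item);
        mutex.V();
    }

    // Pops the front item into `item`; returns false if the queue was empty.
    bool pop(int &item) {
        mutex.P();
        bool ok = !q.empty();
        if (ok) {
            item = q.front();
            q.pop();
        }
        mutex.V();
        return ok;
    }

    int size() {
        mutex.P();
        int n = (int)q.size();
        mutex.V();
        return n;
    }

private:
    Semaphore mutex;                      // guards every access to q
    std::queue<int> q;                    // STL queue: front(), push(), pop(), size()
};
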
          Running Program
• Use the terminal window (Unix) command
  called “top” to look at the amount of RAM
  used over time
• The amount of RAM continuously increases
  over time



       Resolving the Problem
• There is a race condition
• Because the consumer prints, this slows the
  consumer down
• The producer thus fills up the linked list
  more rapidly than the consumer takes items
  from the linked list
• A reasonable solution is to bound (limit) the
  number of items that can be in the list
• Exercise: Try to solve this using
  semaphores
         Can solve this with a
        “counting” semaphore
• Initialize a semaphore to the maximum number of
  elements allowed in the list
  – it keeps track of the remaining slots to be filled in
    the list
• Call P on this counting semaphore in Add
  – when the list has no remaining slots, Add will
    block
• Call V on this counting semaphore in Remove
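
A sketch of the exercise (an assumption, not the slides' solution), reusing the Semaphore class sketched earlier. empty_slots is initialized to the chosen bound and counts the remaining free slots, so Add blocks once the list is full; the full bounded-buffer solution later in the chapter also blocks the consumer when the list is empty.

#include <queue>

const int MAX_ITEMS = 1000;           // illustrative bound on the list size

Semaphore empty_slots(MAX_ITEMS);     // counting semaphore: remaining free slots
Semaphore mutex(1);                   // mutual exclusion on the list
std::queue<int> items;

void Add(int item) {                  // producer side: P on the counting semaphore
    empty_slots.P();                  // blocks when no slots remain
    mutex.P();
    items.push(item);
    mutex.V();
}

int Remove() {                        // consumer side: V on the counting semaphore
    mutex.P();
    int item = items.front();         // this sketch assumes the list is non-empty here
    items.pop();
    mutex.V();
    empty_slots.V();                  // one more free slot
    return item;
}
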
     Hardware Methods for Atomic
         Instruction Sequences
• Remember that semaphore P & V must be executed
  atomically
   – But how can we do this?
• Single CPU system
   – Turn off interrupts for brief periods
   – Not a general solution for critical sections
   – But, can be used to implement short critical sections (e.g., P
     & V implementation of semaphores)
   – Why is this OK only for short critical sections?
• May not be suitable for a multiple CPU system
   – May have to send a message to all other CPUs to indicate
     interrupts are turned off, which may be time consuming
   – Goal of only turning off interrupts for brief periods will
     likely be violated
             Multiple CPU systems
• Typically use TestAndSet or Swap instructions
   – Enable process to know that it (not another process) changed a
     variable from false to true
   – Multiprocessors often implement these instructions
• Swap(&A, &B):
   – Atomically swap values of variables A & B
• TestAndSet(&lock):
   – Atomically returns current value of lock and changes it to true
• If two of these instructions start to execute simultaneously
  on different CPUs, they will execute sequentially in
  some order; across CPUs, the instruction is atomic
• Use of these instructions for critical sections requires a
  busy wait: not a general solution for critical sections
• But, can be used to implement atomic P/V for semaphores
Mutual-Exclusion with Swap (Fig. 6.7)
  // lock initialization occurs once, across all processes
  boolean lock = false; // variable shared amongst processes & CPUs
  boolean key;          // variable not shared: accessed by this thread only
  // In the following, at most 1 process will have key == false.
  // If a process has key == false, then it can access the CS;
  // otherwise it busy waits until key == false.
  do {
       key = true;
       // The question being asked is: am I the process that
       // changed lock from false to true?
       while (key == true) {
          Swap(&lock, &key);
       }
      // Critical Section goes here
      lock = false; // assuming atomic assignment across processors
      // Remainder section
  } while (1);

  Can be used to provide mutual exclusion in the CS of a semaphore
           Mutual-Exclusion with
           TestAndSet (Fig. 6.5)
// lock initialization occurs once, across all processes
boolean lock = false; // shared amongst processes
// Process that succeeds in changing lock from false to
// true gains access to the CS.
// The others busy wait until lock returns to false.
do {
   // TestAndSet returns current value of lock & changes it to true
   while (TestAndSet(&lock)) {}
   // Critical section goes here
   lock = false; // assumed atomic
   // Remainder section
} while (1);

Can be used to provide mutual exclusion in the CS of a semaphore
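
For comparison, C++ exposes a hardware test-and-set through std::atomic_flag: test_and_set() atomically sets the flag and returns its previous value, so a spin lock looks just like Fig. 6.5. This is a sketch, not something shown on the slides.

#include <atomic>

std::atomic_flag lock_flag = ATOMIC_FLAG_INIT;   // shared; starts out clear (false)

void spin_lock() {
    // Keep trying until we are the thread that changed the flag from clear to set.
    while (lock_flag.test_and_set(std::memory_order_acquire)) {}
}

void spin_unlock() {
    lock_flag.clear(std::memory_order_release);  // lock = false
}
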
More Synchronization Examples
• Semaphores
  – Bounded buffer
  – Readers-writers
• Dining philosophers
• Semaphores using condition variables



Classic Problem: Bounded Buffer
• One process consuming items from a buffer
• Other process producing items into a buffer
• Need to ensure proper behavior when buffer
  is
  – Full: Blocks producer
  – Empty: Blocks consumer (improvement over
    previous solution)
• Need to provide mutually exclusive access
  to buffer (e.g., queue)
• Can solve with semaphores
• Illustrates some different uses of
  semaphores: mutual exclusion, and counting
           Bounded Buffer Problem
 // semaphores & data buffer shared across threads
 semaphore empty(BUFFER_LENGTH); // number of empty slots
 semaphore full(0); // number of full slots
 semaphore mutex(1); // mutual exclusive access to buffer

// producer thread         // consumer thread
do {                       do {
   // produce item            wait(full);
   wait(empty);               wait(mutex);
   wait(mutex);               // c. s.
   // c. s.                   // remove item from buffer
   // add item to buffer      signal(mutex);
   signal(mutex);             signal(empty);
   signal(full);              // consume item
} while (true);             } while (true);
Classic Problem: Readers-Writers

• Multiple reader processes accessing data
  (e.g., a file)
• Single writer can be writing file
• Sometimes not just one process in critical
  section!



                  Readers-Writers
      // data and semaphores shared across threads
      semaphore wrt(1); // 1 writer or >=1 readers
      semaphore mutex(1); // for test & change of readcount
      int readcount = 0; // number of readers
 // writer process                // reader process
 wait(wrt);                       wait(mutex);
 // code to perform               readcount++;
 // writing                       if (readcount == 1) // first one in?
 signal(wrt);                         wait(wrt);
                                  signal(mutex);
                                  // code to perform reading
                                  wait(mutex);
                                  readcount--;
                                  if (readcount == 0) // last reader out?
                                      signal(wrt);
                                  signal(mutex);

Reader priority solution
More Synchronization Abstractions
• Using semaphores can be tricky!
  – E.g., the following code can produce a deadlock;
      • this is highly undesirable!

          // initialization; shared across threads
          semaphore s(1), q(1);

  // thread 1                           // thread 2
  wait(s);                              wait(q);
  wait(q);                              wait(s);
  …                                     …
  signal(s);                            signal(q);
  signal(q);                            signal(s);
   More Synchronization Abstraction:
              Monitors
• Automatically ensures only one thread can be active
  within monitor
   – One thread has lock on monitor; Gain entry by calling methods
   – Effectively, a synchronized class structure where only one
     thread can be accessing a method on a specific object instance at
     one time
• Condition variables
   – Use one of these for each “reason you have for waiting”
   – Enable explicit synchronization
   – wait and signal operations (different than semaphore operations)
  Monitors: Condition Variables
• Wait
   – Invoking thread is suspended until another thread calls signal on
     the same condition variable
   – Gives up lock on monitor; other threads can enter
• Signal (e.g., notify in Java)
   –   Resumes exactly one suspended process blocked on condition
   –   No effect if no process suspended
   –   Choice about which process gets to execute
   –   Resumed process regains lock on monitor when signaling method
       finishes
• Java
   – Wait and notify are with respect to an object
   – If you call wait or notify within an instance method, you are implicitly
     saying this.wait or this.notify, and the wait or notify is with respect
     to this object
   – Simplified idea of condition variables
   – But, can use the Condition interface (as we discussed before)
        General Monitor Syntax
monitor monitor-name {
    // shared variable declarations
    method m1(…) {
        ….
    }                                 Mutual exclusion
    ….                                across methods within
    method mN(…) {                    the monitor for
        ….                            particular object
    }                                 instance; only a single
                                      thread can be
    Initialization code (…) {         executing a method on
    }                                 the object
}
                 wait style
• Usually, do:
  while (boolean expression)
     wait()
• Not:
  if (boolean expression)
     wait()
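
A sketch of why the while form matters, using C++ condition variables as a stand-in for monitor wait/signal (an assumption; names like not_empty are illustrative): by the time a signaled waiter reacquires the lock, another thread may already have consumed the condition, and wakeups can be spurious, so the condition must be rechecked.

#include <condition_variable>
#include <mutex>
#include <queue>

std::mutex m;
std::condition_variable not_empty;
std::queue<int> q;

int remove_item() {
    std::unique_lock<std::mutex> lock(m);
    while (q.empty())                  // re-check after every wakeup; an "if" here
        not_empty.wait(lock);          // could let us fall through on a queue that
                                       // is still empty
    int item = q.front();
    q.pop();
    return item;
}
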
                Example
• Solve the bounded buffer problem using a
  Monitor
• With the wait and signal Monitor operations




[Slides 50-51: the original monitor-based solution code is not recoverable from this extraction]
               Example (2)
• Now, use two conditions (see the sketch below):
• One condition for Adding
  – Wait on this condition while the queue is full
• And one condition for Removing
  – Wait on this condition while the queue is empty




[Slides 53-55: the original two-condition solution code is not recoverable from this extraction; a sketch follows below]
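
A sketch of Example (2) (an assumption; the slides' own code did not survive extraction). C++ has no monitor construct, so a mutex plays the role of the monitor lock and the two condition variables are the two "reasons for waiting".

#include <condition_variable>
#include <cstddef>
#include <mutex>
#include <queue>

class BoundedBuffer {
public:
    explicit BoundedBuffer(std::size_t capacity) : cap(capacity) {}

    void add(int item) {
        std::unique_lock<std::mutex> lock(m);       // "enter the monitor"
        while (q.size() == cap)                     // wait while the queue is full
            not_full.wait(lock);
        q.push(item);
        not_empty.notify_one();                     // signal a waiting remover
    }

    int remove() {
        std::unique_lock<std::mutex> lock(m);
        while (q.empty())                           // wait while the queue is empty
            not_empty.wait(lock);
        int item = q.front();
        q.pop();
        not_full.notify_one();                      // signal a waiting adder
        return item;
    }

private:
    std::mutex m;                                   // the monitor lock
    std::condition_variable not_full;               // condition for adding
    std::condition_variable not_empty;              // condition for removing
    std::queue<int> q;
    std::size_t cap;
};
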
          Classic Problem - 3
• Dining-Philosophers
  – 5 philosophers, eating rice, only 5 chopsticks
  – Pick up one chopstick at a time
  – What happens if each philosopher picks up a
    chopstick and tries to get a second?




       Dining Philosophers Using
               Monitors
• Observation
   – A philosopher only eats when both
     neighbors are not eating
• pickup(i)
   – Start to eat only when both neighbors
     are not eating
• putdown(i)
   – Enable neighbors to eat if they are
     hungry

              do {
                dp.pickup(i);
                // eat
                dp.putdown(i);
                // think
              } while (true);
[Slides 58-59: the monitor-based dining philosophers solution from p. 249 of the text; the figure is not recoverable from this extraction. A sketch follows below.]
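
Since the p. 249 figure did not survive extraction, here is a sketch of the same idea in C++ (an assumption; class and member names are illustrative): each philosopher has a state and a condition to wait on, and may move to EATING only when neither neighbor is eating.

#include <condition_variable>
#include <mutex>

// Monitor-style dining philosophers: philosophers are numbered 0..4.
class DiningPhilosophers {
public:
    void pickup(int i) {
        std::unique_lock<std::mutex> lock(m);   // "enter the monitor"
        state[i] = HUNGRY;
        test(i);                                // try to start eating now
        while (state[i] != EATING)              // otherwise wait until a neighbor
            self[i].wait(lock);                 // putting down chopsticks signals us
    }

    void putdown(int i) {
        std::unique_lock<std::mutex> lock(m);
        state[i] = THINKING;
        test((i + 4) % 5);                      // left neighbor may be able to eat now
        test((i + 1) % 5);                      // right neighbor may be able to eat now
    }

private:
    enum State { THINKING, HUNGRY, EATING };

    // Called with the lock held: let philosopher i eat if it is hungry
    // and neither neighbor is eating.
    void test(int i) {
        if (state[(i + 4) % 5] != EATING && state[i] == HUNGRY &&
            state[(i + 1) % 5] != EATING) {
            state[i] = EATING;
            self[i].notify_one();               // wake philosopher i if it is waiting
        }
    }

    std::mutex m;                               // the monitor lock
    std::condition_variable self[5];            // one condition per philosopher
    State state[5] = {THINKING, THINKING, THINKING, THINKING, THINKING};
};

In the slide's loop, dp would be one instance of this class shared by the five philosopher threads.
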
               Example
• Implement semaphores using Monitors
• There is some subtlety to the
  implementation




[Slides 61-63: the original implementation code is not recoverable from this extraction]
   User Space Synchronization
  Provided by OS’s and Libraries
• Linux (current kernel version)
  – POSIX semaphores (shared memory & non-
    shared memory)
  – futex: wait for value at memory address to change
• Windows XP thread synchronization
  – provides dispatcher objects
  – mutexes (semaphores with value 1), semaphores,
    events (like condition variables), and timers can
    be used with dispatcher objects
• Pthreads
   – has conditions and mutexes
http://www-user.tu-chemnitz.de/~heha/oney_wdm/ch04e.htm
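
A minimal usage sketch of a POSIX unnamed semaphore from the first bullet (an assumption, not from the slides), used here as a mutex around a critical section; compile with -pthread on Linux.

#include <cstdio>
#include <semaphore.h>

sem_t mutex;                       // POSIX unnamed semaphore

void critical_work() {
    sem_wait(&mutex);              // P
    // ... critical section ...
    std::printf("in the critical section\n");
    sem_post(&mutex);              // V
}

int main() {
    sem_init(&mutex, 0, 1);        // 0 = shared only between threads of this process,
                                   // initial value 1
    critical_work();
    sem_destroy(&mutex);
    return 0;
}
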

								