                                Concurrency
• Concurrency is the simultaneous execution of program
  code
   – instruction level – 2 or more machine instructions
     simultaneously
   – statement level – 2 or more high-level statements
     simultaneously
   – unit level – 2 or more subprograms simultaneously
   – program level – 2 or more programs simultaneously
      • instruction and program level concurrency involve no language issues so
        we won’t consider them and instead concentrate on the other two levels
      • instruction and program level concurrency typically require parallel
        processing while statement and unit level merely require multiprocessing
• What languages support concurrency? And which type?
   – how does the language handle concurrency?
            Categories of Concurrency
• Physical concurrency: program code executed in parallel on
  multiple processors
• Logical concurrency: program code executed in an interleaved
  fashion on a single processor with the OS or language responsible
  for the switching from one piece of code to another
   – to a programmer, physical and logical concurrency look the same; it is the
     language implementor who must map logical concurrency onto the underlying
     hardware
   – thread of control is the path through the code taken, that is, the sequence of
     program points reached
• Unit-level logical concurrency is implemented through the coroutine
   – a form of subroutine but with a different behavior from previous
     subroutines
   – one coroutine executes at a time, like normal subroutines, but in an
     interleaved fashion rather than a LIFO fashion
   – a coroutine can interrupt itself to start up another coroutine
       • for instance, a called function may stop to let the calling function execute, and
         then return to the same spot in the called function later
     Subprogram-Level Concurrency
• Task – a program unit that can execute concurrently with
  another program unit
   – tasks are unlike subroutines because
      • they may be implicitly started rather than explicitly as with subprograms
      • a unit that invokes a task does not have to wait for the task to complete
        but may continue executing
      • control may or may not return to the calling task
   – tasks may communicate through
      • shared memory (non-local variables), parameters, or through message
        passing
   – PL/I was the first language to offer subprogram-level
     concurrency via “call task” and “wait(event)”
     instructions
      • programmers can specify the priority of each task so that waiting
        routines are prioritized
       • wait is used to force a routine to wait until the routine it is waiting on
         has finished
• Disjoint tasks – tasks which do not affect each other or
  have any common memory
   – most tasks are not disjoint but instead share information
              Example: Card Game
• Four players, each using the same strategy
• The card game is simulated as follows:
   – master unit creates 4 coroutines
   – master unit initializes each coroutine such that each starts with
     a different set of cards (perhaps an array randomly generated)
• Master unit selects one of the 4 coroutines to start the
  game and resumes it
   – the coroutine runs its routine to decide what it will play and
     then resumes the next coroutine in order
   – after the 4th coroutine executes its “play”, it resumes the 1st one
     to start the next turn
      • notice that this is not like a normal function call where the caller is
        resumed once the function terminates!
   – this continues until one coroutine wins, at which point the
     coroutine returns control to the master unit
      • notice here that transfer of control is always progressing through each
        hand, this is only one form of concurrent control
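       A minimal sketch of this resume pattern follows. It is not from the
       slides: Java (the language of the later examples) has no built-in
       coroutines, so each player is modeled here as a thread that blocks until
       it is handed the turn and then resumes the next player; all names
       (Player, resumePlayer, CardGame) and the turn handling are illustrative
       only.

       import java.util.concurrent.SynchronousQueue;

       class Player extends Thread {
          private final SynchronousQueue<Integer> turns = new SynchronousQueue<>();
          private final int id;
          private Player next;                     // player resumed after this one

          Player(int id) { this.id = id; setDaemon(true); }
          void setNext(Player next) { this.next = next; }

          void resumePlayer(int turn) throws InterruptedException {
             turns.put(turn);                      // hand over control, like "resume"
          }

          public void run() {
             try {
                while (true) {
                   int turn = turns.take();        // wait until someone resumes us
                   System.out.println("turn " + turn + ": player " + id + " plays");
                   next.resumePlayer(id == 3 ? turn + 1 : turn);   // resume next player
                }
             } catch (InterruptedException ignored) { }
          }
       }

       class CardGame {                            // the "master unit"
          public static void main(String[] args) throws Exception {
             Player[] players = new Player[4];
             for (int i = 0; i < 4; i++) players[i] = new Player(i);
             for (int i = 0; i < 4; i++) {
                players[i].setNext(players[(i + 1) % 4]);
                players[i].start();
             }
             players[0].resumePlayer(1);           // master resumes the first player
             Thread.sleep(100);                    // let a few turns run; daemon
          }                                        // threads die when main exits
       }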
                         More on Tasks
• A heavyweight task executes in its own address space with its
  own run-time stack
• Lightweight tasks share the same address space and run-time
  stack with each other
    – the lightweight task is often called a thread
• Non-disjoint tasks must communicate with each other
    – this requires synchronization:
          • cooperative synchronization (producer-consumer relationship)
         • competitive synchronization (access to a shared critical section)
    – synchronization methods:
         • monitors
         • semaphores
         • message passing
Without synchronization, shared data can become corrupt – for example,
TOTAL should end up as 8 (if A fetches TOTAL before B) or 7 (if B fetches
first), but an unsynchronized interleaving can lose one of the updates
(a minimal sketch follows below)
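A minimal Java sketch of this lost-update problem follows; the starting value
(3) and the two operations (task A adds 1, task B doubles) are assumed for
illustration only.

   class RaceDemo {
      static int total = 3;                               // shared TOTAL

      public static void main(String[] args) throws InterruptedException {
         Thread a = new Thread(() -> total = total + 1);  // task A
         Thread b = new Thread(() -> total = 2 * total);  // task B
         a.start(); b.start();
         a.join(); b.join();
         // correct results: 8 (A updates before B) or 7 (B updates before A);
         // if both fetch the old value before either stores, we get 4 or 6
         System.out.println("TOTAL = " + total);
      }
   }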
             Liveness and Deadlock
• These terms describe whether a concurrent task (or multitasking system) can
  make progress
• A process which is not making progress toward completion may
  be in a state of:
   – deadlock – a process is holding resources other processes need while those
     processes are holding resources this process needs
   – starvation – a process is continually unable to access the resource because
     others get to it first
• Liveness is when a task is in a state that will eventually allow it to
  complete execution and terminate
   – meaning that the task is not suffering from and will not suffer from either
     deadlock or starvation
   – without a fair selection mechanism for the next concurrent task, a process
     could easily wind up starving, and without adequate OS protection, a
     process could wind up in deadlock
       • these issues are studied in operating systems and so we won’t discuss them in
         much more detail in this chapter
• Note that without concurrency or multitasking, deadlock cannot
  arise and starvation should not arise
   – unless resources are unavailable (off-line, not functioning)
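   As a hedged illustration (none of these names come from the slides), the
   classic way deadlock arises is two threads acquiring the same two resources
   in opposite orders:

   class DeadlockDemo {
      public static void main(String[] args) {
         final Object r1 = new Object(), r2 = new Object();
         Thread t1 = new Thread(() -> {
            synchronized (r1) {                  // t1 holds r1 ...
               pause();
               synchronized (r2) { System.out.println("t1 done"); }  // ... and needs r2
            }
         });
         Thread t2 = new Thread(() -> {
            synchronized (r2) {                  // t2 holds r2 ...
               pause();
               synchronized (r1) { System.out.println("t2 done"); }  // ... and needs r1
            }
         });
         t1.start(); t2.start();                 // each now holds a resource the other
      }                                          // needs, so neither can proceed

      static void pause() {
         try { Thread.sleep(100); } catch (InterruptedException e) { }
      }
   }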
        Design Issues for Concurrency
• Unit-level concurrency is supported by synchronization
    – two forms: competitive and cooperative
• How is synchronization implemented?
    –   semaphores
    –   monitors
    –   message passing
    –   threads
• How and when do tasks begin and end execution?
• How and when are tasks created (statically or dynamically)?
• Synchronization guards the execution of coroutines
    – coroutines have these features
         •   multiple entry points
         •   a means to maintain status information when inactive
         •   invoked by a “resume” instruction (rather than a call)
         •   may have “initialization code” which is only executed when the coroutine is created
    – typically, all coroutines are created by a non-coroutine program unit often called a
      master unit
    – if a coroutine reaches end of its code, control is transferred to master unit
         • otherwise, control is determined by a coroutine resuming another one
                          Semaphores
• A data structure that provides synchronization through mutually
  exclusive access
   – a semaphore typically just stores an int value: 1 means that the shared
     resource is available, 0 means it is unavailable
   – the semaphore uses two operations: wait and release
      • when a process needs to access the shared resource, it executes wait
      • when the process is done with the shared resource, it executes release
   – for the semaphore to work, wait and release cannot be interrupted (e.g., via
     multitasking)
      • so wait and release are often implemented in the machine’s instruction set
        as atomic instructions

   void wait(semaphore s)
   {
      if (s.value > 0)
         s.value--;
      else
         place the calling process in a wait queue, s.queue
   }

   void release(semaphore s)
   {
      if s.queue is not empty
         wake up the first process in s.queue
      else
         s.value++;
   }

   A simpler form of the semaphore uses a while loop instead of a queue;
   that is, the process stays in a while loop while s.value <= 0
                   Using the Semaphore
• Producer-consumer code: with only the full and empty semaphores, the
  consumer must wait until the producer produces a value and the producer
  must wait for room in the buffer (cooperative synchronization)

   semaphore full, empty;
   full.value = 0;
   empty.value = BUFFER_LENGTH;

   // producer code:
   while(1)
   {
      // produce value
      wait(empty);
      insert(value);
      release(full);
   }

   // consumer code:
   while(1)
   {
      wait(full);
      retrieve(value);
      release(empty);
      // consume value
   }

• Adding the access semaphore below lets any producer produce if there is
  room in the buffer and any consumer consume whenever a product is
  available (competitive synchronization)
   – NOTE: access ensures that two producers or two consumers are not
     accessing the buffer at the same time

   semaphore access, full, empty;
   access.value = 1; full.value = 0;
   empty.value = BUFFER_LENGTH;

   // producer code:
   while(1) {
      // produce value
      wait(empty);
      wait(access);
      insert(value);
      release(access);
      release(full);
   }

   // consumer code:
   while(1) {
      wait(full);
      wait(access);
      retrieve(value);
      release(access);
      release(empty);
      // consume value
   }
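   For comparison, the same bounded-buffer pattern can be written with Java’s
   java.util.concurrent.Semaphore, whose acquire and release methods play the
   roles of wait and release above. This is only a sketch; the class name,
   buffer size, and method names are assumptions, not part of the slides.

   import java.util.concurrent.Semaphore;

   class BoundedBuffer {
      private static final int BUFFER_LENGTH = 10;
      private final int[] buf = new int[BUFFER_LENGTH];
      private int nextIn = 0, nextOut = 0;

      // full counts filled slots, empty counts free slots (cooperative synch);
      // access is a binary semaphore guarding the buffer (competitive synch)
      private final Semaphore full = new Semaphore(0);
      private final Semaphore empty = new Semaphore(BUFFER_LENGTH);
      private final Semaphore access = new Semaphore(1);

      void insert(int value) throws InterruptedException {
         empty.acquire();                        // wait(empty)
         access.acquire();                       // wait(access)
         buf[nextIn] = value;
         nextIn = (nextIn + 1) % BUFFER_LENGTH;
         access.release();                       // release(access)
         full.release();                         // release(full)
      }

      int retrieve() throws InterruptedException {
         full.acquire();                         // wait(full)
         access.acquire();                       // wait(access)
         int value = buf[nextOut];
         nextOut = (nextOut + 1) % BUFFER_LENGTH;
         access.release();                       // release(access)
         empty.release();                        // release(empty)
         return value;
      }
   }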
         Evaluation of Semaphores
• Binary semaphores were first implemented in PL/I which
  was the first language to offer concurrency
   – the binary semaphore version had no queue, so a waiting
     coroutine might not gain access in a timely fashion
      • in fact, there is no guarantee that a coroutine would not starve
      • so the semaphore’s use in PL/I was limited
   – ALGOL 68 offered compound-level concurrency and had a
     built-in data type called sema (semaphore)
• Unfortunately, semaphore use can lead to disastrous
  results if not checked carefully
   – misuse can lead to deadlock or can permit non-mutually-exclusive access
     (e.g., by omitting or misplacing a wait or release)
      • in general, there is no way to check semaphore usage to ensure
        correctness, so offering built-in semaphores in a language does not
        necessarily help matters
   – instead, we will now turn to a better synchronization construct
      • the monitor
                           Monitors
• Introduced in Concurrent Pascal (1975) and later used in Modula and
  other languages
   – Concurrent Pascal is Pascal + classes (Simula 67), tasks (for
     concurrency) and monitors
   – the general form of the Concurrent Pascal monitor is given below

        type monitor_name = monitor(params)
           --- declaration of shared vars ---
           --- definitions of local procedures ---
           --- definitions of exported procedures ---
           --- initialization code ---
        end

      • exported procedures are those that can be referenced externally, e.g.,
        the public portion of the monitor
      • the monitor is an encapsulated data type (ADT) but one that allows
        shared access to its data structure through synchronization
      • one can use the monitor to implement cooperative or competitive
        synchronization without semaphores
      • because the monitor is implemented in the language itself as a
        subprogram type, there is no way to misuse it the way semaphores
        can be misused
 Competitive and Cooperative Synchronization
• Access to the shared data of a monitor is automatically synchronized
  through the monitor
   – competitive synchronization needs no further mechanisms
   – cooperative synchronization requires further communication so that
     one task can alert another task to continue once it has performed
     its operation
• As an example, different tasks communicate through a shared buffer,
  inserting data into and removing data from the buffer
   – through the use of continue(process), one process can alert another
     as to when the datum is ready
                  Message Passing
• While the monitor approach is safer than semaphores,
  it does not work if we are dealing with a concurrent
  system with distributed memory
   – message passing will solve this problem
      • message passing uses a transmission from one process to another
        called a rendezvous
   – we can use either synchronous message passing or
     asynchronous message passing
      • both approaches have been implemented in Ada
          – synchronous message passing in Ada 83
          – asynchronous message passing in Ada 95
    – both of these are rather complex, so we are going to skip them; the
      message passing model is also the one used in OOP, so you should be
      familiar with the idea even if you do not know the implementation
      • you can explore the details on pages 591-598 if you wish
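      Ada’s rendezvous itself is not shown here, but the following hedged Java
      sketch (all names are assumptions) captures the two flavors: a
      SynchronousQueue behaves like synchronous message passing (the sender
      blocks until the receiver accepts the message), while a buffered
      BlockingQueue behaves like asynchronous message passing (the sender
      continues as soon as the message is queued).

      import java.util.concurrent.ArrayBlockingQueue;
      import java.util.concurrent.BlockingQueue;
      import java.util.concurrent.SynchronousQueue;

      class MessagePassingDemo {
         public static void main(String[] args) throws InterruptedException {
            BlockingQueue<String> sync = new SynchronousQueue<>();      // synchronous
            BlockingQueue<String> async = new ArrayBlockingQueue<>(10); // asynchronous

            Thread receiver = new Thread(() -> {
               try {
                  System.out.println("got: " + sync.take());   // rendezvous-like point
                  System.out.println("got: " + async.take());
               } catch (InterruptedException e) { }
            });
            receiver.start();

            sync.put("hello (sender waited for the receiver)");  // blocks until taken
            async.put("hello (sender did not wait)");            // returns immediately
            receiver.join();
         }
      }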
                                     Threads
• The concurrent unit in Java and C# is the thread
    – lightweight tasks (as opposed to Ada’s heavyweight tasks)
    – a thread is code that shares address and stack space but each thread has its own
      private data space
    – In Java, a thread has at least two methods
• A Java program initially executes as a single thread
    – to define your own threaded class, you have to extend the Thread class
• If the program has multiple threads, a scheduler must manage which thread
  should be run at any given time
    – different OSs schedule threads in different ways, so executing threads is somewhat
      unpredictable
• Additional thread methods include:
    –   yield (voluntarily stop itself)
    –   sleep (block itself for a given number of milliseconds)
    –   join (wait for another thread to terminate)
    –   stop, suspend and resume
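     A minimal sketch of these ideas (the class name and messages are
     illustrative only): run describes the thread’s action, start launches it
     concurrently while the caller keeps executing, and join waits for it to
     terminate.

     class Greeter extends Thread {
        public void run() {                     // the thread's action
           for (int i = 0; i < 3; i++) {
              System.out.println("hello from the new thread");
              try { Thread.sleep(100); } catch (InterruptedException e) { }
           }
        }

        public static void main(String[] args) throws InterruptedException {
           Greeter t = new Greeter();
           t.start();                           // control returns here immediately
           System.out.println("main keeps running");
           t.join();                            // wait for the thread to terminate
        }
     }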
     Synchronization With Threads
• Competitive synchronization is implemented by
   – specifying that one method’s code must run completely before a
     competitor’s runs
   – this can be done by using the synchronized modifier
      • see the code below, where the two methods are synchronized for
        access to buf
• Cooperative synchronization uses the methods wait, notify and
  notifyAll of the Object class
   – notify will tell a given thread that an event has occurred; notifyAll
     notifies all threads in an object’s wait list
   – wait puts a thread to sleep until it is notified

      class ManageBuffer {
         private int[ ] buf = new int[100];
         …
         public synchronized void deposit(int item) {…}
         public synchronized int fetch( ) {…}
         …
      }
                              Partial Example
class Queue {
  private int[ ] queue;
  private int nextIn, nextOut, filled, size;

  // constructor omitted

  public synchronized void deposit(int item) {
     try {
        while(filled == size) wait( );
        queue[nextIn] = item;
        nextIn = (nextIn + 1) % size;
        filled++;
        notifyAll( );
     }
     catch(InterruptedException e) {}
  }

  public synchronized int fetch( ) {
     int item = 0;
     try {
        while(filled == 0) wait( );
        item = queue[nextOut];
        nextOut = (nextOut + 1) % size;
        filled--;
        notifyAll( );
     }
     catch(InterruptedException e) {}
     return item;
  }
} // end class
            We would create Producer and Consumer classes that extend Thread and
            contain the Queue as a shared data structure (create a single Queue object and
            use it when initializing both the Producer object and the Consumer object)

            See pages 603-606 for the full example
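            A hedged sketch of those Producer and Consumer classes (the class
            names, the loop counts, and the Queue constructor argument are
            assumptions; the constructor itself was omitted above):

            class Producer extends Thread {
               private final Queue queue;
               Producer(Queue queue) { this.queue = queue; }
               public void run() {
                  for (int i = 1; i <= 5; i++) queue.deposit(i);        // produce
               }
            }

            class Consumer extends Thread {
               private final Queue queue;
               Consumer(Queue queue) { this.queue = queue; }
               public void run() {
                  for (int i = 1; i <= 5; i++)
                     System.out.println("consumed " + queue.fetch());   // consume
               }
            }

            class Driver {
               public static void main(String[] args) {
                  Queue shared = new Queue(4);   // one shared Queue object
                  new Producer(shared).start();  // (assumed constructor)
                  new Consumer(shared).start();
               }
            }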
                          C# Threads
• A modest improvement over Java threads
   – any method can run in its own thread by creating a Thread
     object
   – Thread objects are instantiated with a ThreadStart delegate, which is
     passed the method that implements the action of the new Thread object
       • C#, like Java, has methods for start, join, and sleep, and adds an abort
         method that makes termination of threads superior to that in Java
• In C#, threads can be synchronized by
    – being placed inside a Monitor class (for creating an ADT with
      a critical section, as in Concurrent Pascal)
    – using the Interlocked class (this is used only when synchronizing
      access to a datum that is to be incremented or decremented)
   – using the lock statement (to mark access to a critical section
     inside a thread)
• C# threads are not as flexible as Ada threads, but are
  simpler to use/implement
      Statement-Level Concurrency
• From a language designer’s viewpoint, it is important to have
  constructs that allow a programmer to identify to a compiler how
  a program can be mapped onto a multiprocessor
    – there are many ways to parallelize code; the book only refers to
      methods for a SIMD architecture
• High-Performance FORTRAN is an extension to FORTRAN 90
  that allows programmers to specify statement-level concurrency
   – PROCESSORS is a specification that describes the number of processors
     available for the program. This is used with other specifications to tell the
     compiler how data is to be distributed
   – DISTRIBUTE statement specifies what data is to be distributed (e.g. an
     array)
   – ALIGN statement relates the distribution of arrays with each other
   – FORALL statement lists those statements that can be executed
     concurrently
       • details can be found on pages 609-610
• Other languages are available to implement statement-level
  concurrency such as Cn (a variation of C), Parallaxis-III (a
  variation of Modula-2) and Vector Pascal
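    HPF syntax itself is not reproduced here; as a rough analogue only (not
    equivalent to FORALL), Java’s parallel streams let a programmer mark
    independent element-wise work so the runtime can spread it across the
    available processors:

    import java.util.stream.IntStream;

    class ForallAnalogue {
       public static void main(String[] args) {
          double[] a = new double[1_000_000];
          // every element update is independent, so the iterations may run
          // concurrently on multiple cores (conceptually similar to FORALL)
          IntStream.range(0, a.length).parallel()
                   .forEach(i -> a[i] = Math.sqrt(i));
          System.out.println(a[9]);              // prints 3.0
       }
    }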

								