

Chapter 12: Concurrency
                Concurrency
• Simultaneous execution of program code
  – Instruction Level - 2 or more instructions
    simultaneously
  – Statement Level - 2 or more statements
    simultaneously
  – Unit Level - 2 or more subprograms simultaneously
  – Program Level - 2 or more programs
    simultaneously
• We will focus on the issues involved in
  implementing PLs with Unit and Statement
  level concurrency
               Multiprocessors
• These are mainly issues for computer architecture
  courses; however, we identify two different types of
  multiprocessor architectures: SIMD and MIMD
• We are not necessarily interested in these
  architectures, although it should be noted that
  concurrency can be carried out on a
  multiprocessor or on a single processor using
  multitasking or time sharing
• We are also not necessarily interested in parallel
  programming issues, other than identifying those
  issues that a PL must handle
         Symmetric Concurrency
• Sometimes called quasi-concurrency
• Program units (coroutines) execute in an
  intertwined fashion in cooperation to solve the
  problem
• Only one coroutine executes at a time
• A coroutine can suspend itself and start up
  (resume) another coroutine
  – for instance, a called coroutine may pause to let the
    calling unit execute, and control later returns to the
    same spot in the called coroutine
          Physical Concurrency
• Most general category of concurrency
• Subroutines execute simultaneously
• This may take place on multiple processors, or
  in an interleaved fashion on a single processor
  (known as logical concurrency)
• From a programmer’s or language designer’s
  point of view, physical and logical concurrency
  are the same although the language implementor
  must map the logical concurrency onto the
  underlying hardware
                 Threads
• A thread of control is the sequence of program
  points reached as control flows through the
  program
• Quasi-concurrent programs have a single
  thread of control
• Physically concurrent programs have
  multiple threads of control, one per processor
      Why study Concurrency
• Provides a method for conceptualizing
  solutions to problems (a task-level or top-
  down design perspective). Many problems
  and domains lend themselves to
  concurrency (e.g. simulations, AI problems,
  vector problems, etc…)
• Multiprocessor systems are becoming
  widely used and available
 Subprogram-Level Concurrency
• Task - program unit that can be executed
  concurrently with other program units
• Tasks are unlike subroutines because
  – they may be started implicitly rather than by an
    explicit call
  – a unit that starts a task does not have to wait
    for the task to complete but may continue
    executing (see the sketch after this slide)
• Tasks may communicate through shared
  memory (non-local variables), message
  passing or parameters
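• A minimal sketch of a task in Python, using a thread as the task and a queue as the
  shared memory; the function and variable names are illustrative, not from the text:

```python
import threading
import queue

results = queue.Queue()          # shared structure used to communicate a result

def lookup_task(key):
    # Illustrative task body: do some independent work, then
    # communicate through the shared queue.
    results.put((key, hash(key)))

# The invoking unit starts the task and keeps executing without waiting.
t = threading.Thread(target=lookup_task, args=("customer-42",))
t.start()
print("caller continues while the task runs")
t.join()                         # wait only when the result is actually needed
print(results.get())
```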
             More on Tasks
• Disjoint tasks - tasks which do not affect
  each other or have any common memory
• Most tasks are not disjoint but instead share
  information. This requires synchronization:
  – Cooperating synchronization (as in a consumer-
    producer relationship)
  – Competitive synchronization (as in access to a
    critical section)
• Three methods of synchronization:
  Semaphores, Monitors, Message Passing
       Liveness and Deadlock
• Two problems with synchronization (whether
  it is for concurrency or multitasking in
  general):
  – Deadlock - when each process holds a resource
    that another needs, so none of them can proceed
  – Starvation - when a process is constantly unable
    to access the resource because others get to it
    first
• Liveness - the task is in a state that will
  eventually allow it to complete execution and
  terminate (in OS, this is called Progress)
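• A hedged sketch of how deadlock can arise, using two Python locks acquired in
  opposite orders (contrived timing; the names are illustrative):

```python
import threading, time

lock_a, lock_b = threading.Lock(), threading.Lock()

def task1():
    with lock_a:                 # task 1 holds A ...
        time.sleep(0.1)
        with lock_b:             # ... and waits for B
            pass

def task2():
    with lock_b:                 # task 2 holds B ...
        time.sleep(0.1)
        with lock_a:             # ... and waits for A: neither can proceed
            pass

# Starting both threads typically deadlocks; acquiring the locks in the
# same order in both tasks (A then B) would avoid it.
```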
                Coroutines
• Symmetrically concurrent program units
  (only one executes at a time)
• Have multiple entry points
• Have a means to maintain status information
  when the coroutine is inactive
• Are invoked by a “resume” instruction
  (rather than a call)
• May have “initialization code” which is executed
  only once, when the coroutine is created or first
  resumed
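• Python generators behave much like these coroutines: local state is kept while the
  generator is inactive, each yield is in effect an entry point, and next() plays the
  role of resume. A minimal sketch:

```python
def counter(start):
    n = start                    # "initialization code": runs once, on first resume
    while True:
        yield n                  # suspend here; state (n) is kept while inactive
        n += 1                   # the next resume re-enters at this point

c = counter(10)                  # create the coroutine (nothing runs yet)
print(next(c))                   # first resume -> 10
print(next(c))                   # resumes after the yield -> 11
```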
       Coroutine Execution Flow
• Typically, all coroutines are created by non-
  coroutine program units (often called master units)
• When created, each coroutine initializes itself
• When all coroutines are ready, the master unit
  resumes one of them
• If a coroutine reaches the end of its code, control
  is transferred to the master unit
• Otherwise, control is determined by a coroutine
  resuming another one or returning to the master
  unit until the program reaches termination
        Example: Card Game
• Four players, each using the same strategy
• Game can be simulated with a master unit
  which creates 4 coroutines, initializes them
  to a randomly generated selection of cards
• Simulation is performed by resuming one
  coroutine, which after its turn ends, resumes
  the next coroutine in a round robin fashion
  until the game ends
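• A rough sketch of this structure with Python generators as the player coroutines and
  a plain loop as the master unit (the “strategy” and end-of-game test are placeholders):

```python
import random

def player(name, hand):
    # Initialization: runs when the player coroutine is first resumed.
    while hand:
        card = hand.pop()                  # placeholder "strategy": play the top card
        yield f"{name} plays {card}"       # end of turn: suspend until resumed again

# Master unit: create and initialize four coroutines with random hands.
deck = list(range(52)); random.shuffle(deck)
players = [player(f"P{i}", [deck.pop() for _ in range(13)]) for i in range(4)]

# Resume each player in round-robin fashion until the game ends.
finished = False
while not finished:
    for p in players:
        try:
            print(next(p))
        except StopIteration:              # a player ran out of cards: control
            finished = True                # returns to the master unit
            break
```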
       Coroutines in Modula-2
• Modula-2 has no explicit statements or data
  types for either coroutines or concurrency
• However, many Modula-2 compilers can
  create coroutines by the following mechanism:
  – Low-level module named SYSTEM available in
    the compiler that provides data types and
    procedures for creating, starting and resuming
    coroutines
  – High-level module named PROCESSES available
    in the compiler that uses SYSTEM to provide data
    types and procedures for dealing with coroutines
   More on Modula-2 Coroutines
• Coroutines in Modula-2 are called processes
• Created from parameterless procedures using the
  NEWPROCESS procedure from the SYSTEM module
• Control is transferred (resumed) using the
  TRANSFER procedure from SYSTEM
• Coroutines are statically created as with all
  M-2 procedures
• See page 441, example for multiposition
  buffer, page 443-4 that uses the buffer and
  page 445 for consumer-producer code
   Design Issues for Concurrency
• Concurrency must be supported by
  synchronization (competitive and cooperative)
  – How are these two types of synchronization
    supported?
     • Semaphores
     • Monitors
     • Message Passing
  – How and when do tasks begin and end execution?
  – Are tasks statically or dynamically created?
                  Semaphores
• Typically a binary value which guards against
  multiple processes entering a critical section
  – If process P wants to enter the CS, it attempts to gain
    the semaphore. If the semaphore is available, P sets it to
    unavailable and enters the CS. Upon exiting the CS, P makes
    the semaphore available again
  – See figure 11.4, page 448 and example on page 449
  – Can solve the competitive synchronization problem,
    but only if used properly -- if used improperly it can
    result in either deadlock or loss of mutual exclusion
    (a sketch follows this slide)
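• A hedged sketch of competitive synchronization with a binary semaphore in Python;
  the critical section is just an update to a shared counter:

```python
import threading

guard = threading.Semaphore(1)   # binary semaphore: 1 = available
shared_count = 0

def worker():
    global shared_count
    for _ in range(10_000):
        guard.acquire()          # wait until the semaphore is available, then take it
        shared_count += 1        # critical section: only one thread at a time
        guard.release()          # make the semaphore available again

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(shared_count)              # 40000 with the semaphore; unpredictable without it
```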
                  Monitors
• Because semaphores can be misused, a mechanism
  that encapsulates the synchronization can get
  around this problem (i.e. the programmer need not
  worry about how the synchronization is implemented,
  just assume it is correct)
• The synchronization data structure and
  operations are encapsulated into an ADT-like
  structure called a Monitor
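• A rough Python analogue of a monitor: the shared buffer and its condition variables
  are hidden inside a class, so callers never touch the synchronization directly (the
  class and method names are made up for illustration):

```python
import threading

class BoundedBufferMonitor:
    """Monitor-style ADT: all access to the buffer goes through these methods."""
    def __init__(self, capacity):
        self._items = []
        self._capacity = capacity
        lock = threading.Lock()                    # one lock guards the whole monitor
        self._not_full = threading.Condition(lock)
        self._not_empty = threading.Condition(lock)

    def deposit(self, item):
        with self._not_full:                       # enter the monitor
            while len(self._items) >= self._capacity:
                self._not_full.wait()              # cooperative sync: wait for space
            self._items.append(item)
            self._not_empty.notify()

    def fetch(self):
        with self._not_empty:
            while not self._items:
                self._not_empty.wait()             # wait for an item
            item = self._items.pop(0)
            self._not_full.notify()
            return item
```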
             Concurrent Pascal
• First PL with monitors
• Also included classes (from SIMULA 67)
• Concurrent Pascal has a construct similar to a
  procedure, called a process (declared with a
  type statement)
• Processes have an init statement to provide
  initialization code
• Concurrent Pascal also has a built-in type called
  Monitor (see page 451). Example given on
  page 453-4
            Message Passing
• Monitors are dependable and safe, but only
  usable if the concurrent units share memory
• If concurrent units are distributed (e.g. in a
  network, or an MIMD multiprocessor without
  global memory) then message passing must be
  used to provide proper synchronization
• Message passing can also be used in shared
  memory situations
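• A minimal sketch of asynchronous message passing between two Python threads; the
  queue stands in for the message channel and is the only thing the two threads share:

```python
import threading, queue

channel = queue.Queue()              # the message channel

def producer():
    for i in range(5):
        channel.put(f"message {i}")  # send: does not wait for the receiver
    channel.put(None)                # sentinel: no more messages

def consumer():
    while True:
        msg = channel.get()          # receive: blocks until a message arrives
        if msg is None:
            break
        print("received", msg)

threading.Thread(target=producer).start()
threading.Thread(target=consumer).start()
```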
      Synchronization through MP
• Transmission of a message is called a rendezvous
• Message passing can implement both cooperative
  and competitive synchronization, can be
  synchronous or asynchronous
• In Ada 83, an ACCEPT clause implements the
  synchronization. A task that reaches an
  ACCEPT clause is forced to wait until another
  task sends a message to that ACCEPT clause. At
  this point, the first task can execute its ACCEPT
  clause. See page 457.
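• A rough Python imitation of the blocking pattern behind a rendezvous (not Ada
  semantics): the “server” waits at its accept point, and each caller waits until its
  entry call has been accepted and serviced. The names are illustrative:

```python
import threading, queue

entry = queue.Queue()                    # entry calls waiting to be accepted

def server():
    for _ in range(3):
        value, done = entry.get()        # like ACCEPT: wait here for an entry call
        print("servicing", value)        # body of the accept clause
        done.set()                       # rendezvous complete; release the caller

def client(value):
    done = threading.Event()
    entry.put((value, done))             # make the entry call ...
    done.wait()                          # ... and wait until it has been serviced

threading.Thread(target=server).start()
for i in range(3):
    threading.Thread(target=client, args=(i,)).start()
```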
    Statement-Level Concurrency
• From a language designer’s viewpoint, it is
  important to have constructs that allow a
  programmer to identify to a compiler how a
  program can be mapped onto a multiprocessor
• There are many ways to parallelize code;
  the book only covers methods for a SIMD
  architecture
      High-Performance FORTRAN
• Extensions to FORTRAN 90 that allow
  programmers to specify statement-level concurrency
  – PROCESSORS is a specification that describes the
    number of processors available for the program. This is
    used with other specifications to tell the compiler how
    data is to be distributed
  – DISTRIBUTE statement specifies what data is to be
    distributed (e.g. an array)
  – ALIGN statement relates the distribution of arrays with
    each other
  – FORALL statement lists those statements that can be
    executed concurrently
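• A hedged Python analogue of the data-parallel idea behind FORALL (not HPF syntax):
  because no iteration depends on another, the elements may be computed concurrently,
  here via a process pool:

```python
from concurrent.futures import ProcessPoolExecutor

def f(x):
    return x * x                 # each element's result is independent of the others

data = list(range(16))

if __name__ == "__main__":
    # Conceptually like FORALL: the runtime is free to compute the
    # elements concurrently because no iteration depends on another.
    with ProcessPoolExecutor() as pool:
        squares = list(pool.map(f, data))
    print(squares)
```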

								