MET TC Computer Science Concepts in Telecommunication

MET TC670 B1
Computer Science Concepts in Telecommunication Systems

           Fall 2003
 Lecture 3, September 23, 2003

       Inter-process communication

       Concurrency control and deadlock

 Inter-process Communication

       What is inter-process communication?
         Exchange of data / info among (2 or more) processes,
          signaling, …
       Why do we need inter-process communication?
         Processes exchange info to complete work … example:
              different processes (for different devices) coordinate
       Inter-process versus inter-thread
          Inter-process: implemented by the system (system calls)
          Inter-thread: may require the user program to do a lot
       More generally
         Distributed communication systems …

       Processes coordinate in a multi-process OS
          To share resources, access shared data structures
              e.g., processes/threads accessing a memory cache in a Web server
           Also, to coordinate their execution
               e.g., a disk reader process/thread delivers a block to a network
                writer
       For correctness, we have to control this
          Must assume processes/threads interleave executions
           arbitrarily and at different rates
              scheduling is not under application writers’ control
          We control cooperation using synchronization
              enables us to restrict the interleaving of executions…example
 Shared Resources

       We’ll focus on coordinating access to shared resources
         Basic problem:
               two concurrent processes are accessing a shared variable
               if the variable is read/modified/written by both threads, then access to
                the variable must be controlled (how about read-only?)
               otherwise, unexpected results may occur
       In this lecture, we’ll look at:
          Mechanisms to control access to shared resources
               low level mechanisms like locks
               higher level mechanisms like mutexes, semaphores, monitors, and
                condition variables
            Patterns for coordinating access to shared resources
                bounded buffer, producer-consumer, …
            Detection of potential deadlocks, sometimes an unavoidable
             result when processes compete for resources

 The Classic Example

       Suppose we have to implement a function to
        withdraw money from a bank account:
           int withdraw(account, amount) {
               balance = get_balance(account);
               balance -= amount;
               put_balance(account, balance);
               return balance;
           }
       Now suppose that two people share a bank
        account with a balance of $100.00
         what happens if both go to separate ATMs and
          simultaneously withdraw $10.00 from the account?
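The lost update can be sketched deterministically (Python; the account store and the get/put helpers are hypothetical stand-ins for the slide's pseudocode):

```python
# Deterministic simulation of the lost-update race: both withdrawals
# read the balance before either writes its result back.

balance_store = {"acct": 100}          # hypothetical account database

def get_balance(account):
    return balance_store[account]

def put_balance(account, balance):
    balance_store[account] = balance

# The interleaving shown on the next slide, flattened:
b1 = get_balance("acct")   # person 1 reads 100
b1 -= 10
b2 = get_balance("acct")   # person 2 also reads 100
b2 -= 10
put_balance("acct", b1)    # writes 90
put_balance("acct", b2)    # overwrites with 90 -- one withdrawal is lost

print(balance_store["acct"])   # 90, not the expected 80
```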
 Example Continued

       Represent the situation by creating a separate
         process for each person to do the withdrawals
           Both processes run on the same bank mainframe.
           Or on different machines, but access the same database.
 int withdraw(account, amount) {        int withdraw(account, amount) {
      balance = get_balance(account);       balance = get_balance(account);
      balance -= amount;                    balance -= amount;
      put_balance(account, balance);        put_balance(account, balance);
      return balance;                       return balance;
 }                                      }

       What’s the problem with this?
         what are the possible balance values after this runs?
 Interleaved Schedules

       The problem is that the execution of the two
        threads can be interleaved, assuming preemptive
        scheduling. Execution sequence as seen by the CPU:

                     balance = get_balance(account);
                     balance -= amount;
                                                       <context switch>
                     balance = get_balance(account);
                     balance -= amount;
                     put_balance(account, balance);
                                                       <context switch>
                     put_balance(account, balance);

       What’s the account balance after this sequence?
         who’s happy, the bank or you? :)
 The Crux of the Matter

       The problem is that two concurrent processes
        access a shared resource (account) without any
        synchronization
         creates a race condition
            output is non-deterministic, depends on timing
       We need mechanisms for controlling access to
        shared resources in the face of concurrency
         so we can reason about the operation of programs
            essentially, re-introducing determinism
       Synchronization is necessary for any shared data
         buffers, queues, lists, hash tables, …
   When are Resources Shared?

          Local variables are not shared (thread model)
            refer to data on the stack, each thread has its own stack
            never pass/share/store a pointer to a local variable on
             another thread’s stack
          Global variables are shared
            stored in the static data segment, accessible by any thread
          Dynamic objects are shared
            stored in the heap, shared if you can name it
                 in C, can conjure up the pointer
                     e.g.   void *x = (void *) 0xDEADBEEF
                 in Java, strong typing prevents this; must pass references explicitly

– 10 –
   Mutual Exclusion

          We want to use mutual exclusion to synchronize
           access to shared resources

           Code that uses mutual exclusion to synchronize its
            execution is called a critical section
             only one process/thread at a time can execute in the critical
              section
             all others are forced to wait on entry
             when a process/thread leaves a critical section, another can
              enter

– 11 –
   Critical Section Requirements

          Critical sections have the following requirements
            mutual exclusion
                at most one process is in the critical section
             progress
                if process P is outside the critical section, then P cannot prevent
                 process Q from entering the critical section
             bounded waiting (no starvation)
                if process P is waiting on the critical section, then P will eventually
                 enter the critical section
                   assumes processes eventually leave critical sections

             performance
                the overhead of entering and exiting the critical section is small with
                 respect to the work being done within it
– 12 –
Mechanisms for Building Critical Sections

           Locks
             very primitive, minimal semantics; used to build others
           Semaphores
              basic, easy to get the hang of, hard to program with
           Monitors
             high level, requires language support, implicit operations
             easy to program with; Java “synchronized()” as example
           Messages
             simple model of communication and synchronization based
              on atomic transfer of data across a channel
             direct application to distributed systems
 – 13 –

   Locks

           A lock is an object (in memory) that provides the following
            two operations:
              acquire( ): a process/thread calls this before entering a critical section
              release( ): a process/thread calls this after leaving a critical section
          Threads pair up calls to acquire( ) and release( )
             between acquire( ) and release( ), the process/thread holds the lock
             acquire( ) does not return until the caller holds the lock
                  at most one process/thread can hold a lock at a time (usually)
              so: what can happen if the calls aren’t paired?
          Two basic flavors of locks
             spinlock
             blocking (a.k.a. “mutex”)

– 14 –
       Using Locks

int withdraw(account, amount) {
    acquire(lock);                   // a second caller blocks here
    balance = get_balance(account);
    balance -= amount;
    put_balance(account, balance);
    release(lock);                   // now a blocked caller can proceed with
                                     // its own get/modify/put sequence
    return balance;
}

              Questions:
                What happens when green tries to acquire the lock?
                Why is the “return” outside the critical section? Is it ok?
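The locked withdraw above can be sketched in Python (hypothetical in-memory account store; `threading.Lock` plays the role of acquire/release):

```python
import threading

balance_store = {"acct": 100}      # hypothetical account database
lock = threading.Lock()            # one lock protecting the account

def withdraw(account, amount):
    with lock:                     # acquire(lock) ... release(lock)
        balance = balance_store[account]
        balance -= amount
        balance_store[account] = balance
    return balance                 # "return" outside the critical section is
                                   # fine: balance is a local copy

t1 = threading.Thread(target=withdraw, args=("acct", 10))
t2 = threading.Thread(target=withdraw, args=("acct", 10))
t1.start(); t2.start()
t1.join(); t2.join()
print(balance_store["acct"])   # always 80 now, regardless of interleaving
```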
    – 15 –

          How do we implement locks? Here’s one attempt:
             struct lock {
                 int held = false;
             }
             void acquire(lock) {            the caller “busy-waits”,
                  while (lock->held);        or spins, for the lock to be
                  lock->held = true;         released, hence spinlock
             }
             void release(lock) {
                 lock->held = false;
             }

          Why doesn’t this work?
            where is the race condition?

– 16 –
   Implementing Locks (continued)

          Problem is that implementation of locks has
           critical sections, too!
            the acquire/release must be atomic
                atomic == executes as though it could not be interrupted
                code that executes “all or nothing”
          Need help from the hardware
            atomic instructions
                test-and-set, compare-and-swap, …
            disable/re-enable interrupts
                to prevent context switches

– 17 –
   Spinlocks Redux: Test-and-Set

           Computer provides the following as one atomic
            instruction:
                             bool test_and_set(bool *flag) {
                                 bool old = *flag;
                                 *flag = True;
                                 return old;
                             }

           So, to fix our broken spinlocks, do:
                      struct lock {
                          int held = false;
                      }
                      void acquire(lock) {
                          while (test_and_set(&lock->held));
                      }
                      void release(lock) {
                          lock->held = false;
                      }
– 18 –
   Problems with Spinlocks

          Horribly wasteful!
            if a process is spinning on a lock, the process holding the
             lock cannot proceed.
           How did the lock holder yield the CPU in the first place?
              calls yield( ) or sleep( )
              involuntary context switch
          Only want spinlocks as primitives to build higher-
           level synchronization constructs

– 19 –
    Disabling Interrupts

           An alternative:
                                     struct lock {
                                     }
                                     void acquire(lock) {
                                          cli();   // disable interrupts
                                     }
                                     void release(lock) {
                                         sti();    // reenable interrupts
                                     }

          Can two threads disable interrupts simultaneously?
          What’s wrong with interrupts?
            only available to kernel (why? how can user-level use?)
          Like spinlocks, only used to implement higher-level
            synchronization primitives

– 20 –

   Semaphores

           semaphore = a synchronization primitive
             higher level than locks
              invented by Dijkstra in 1968, as part of the THE operating system
          A semaphore is:
            a variable that is manipulated atomically through two
             operations, signal and wait
            wait(semaphore): decrement, block until semaphore is open
                 also called P(), after Dutch word for test, also called down()
             signal(semaphore): increment, allow another to enter
                 also called V(), after Dutch word for increment, also called up()

– 21 –
   Blocking in Semaphores

           Each semaphore has an associated queue of waiting
            processes
               when wait() is called by a process,
                   if semaphore is “available”, process continues
                   if semaphore is “unavailable”, process blocks, waits on queue
               signal() opens the semaphore
                   if processes are waiting on the queue, one process is unblocked
                   if no processes are on the queue, the signal is remembered for the
                    next time wait() is called
          In other words, semaphore has history
             this history is a counter
             if counter falls below 0 (after decrement), then the semaphore is closed
                  wait decrements counter
                  signal increments counter
– 22 –
   Hypothetical Implementation
          type semaphore = record
              value: integer;
              L: list of processes;
          end;

          wait(S):
              S.value = S.value - 1;
              if S.value < 0
              then begin
                      add this process to S.L;
                      block;
                   end;

          signal(S):
              S.value = S.value + 1;
              if S.value <= 0
              then begin
                      remove a process P from S.L;
                      wakeup P;
                   end;

          Note: wait()/signal() are critical sections! Hence, they must be
          executed atomically with respect to each other.
– 23 –
   Two Types of Semaphores

          Binary semaphore (a.k.a. mutex semaphore)
            guarantees mutually exclusive access to resource
            only one thread/process allowed entry at a time
            counter is initialized to 1
          Counting semaphore (a.k.a. counted semaphore)
             represents a resource with many units available
             allows threads/processes to enter as long as more units are
              available
             counter is initialized to N
                 N = number of units available, e.g., 2 printers

– 24 –
   Example: Bounded Buffer Problem

          AKA producer/consumer problem
            there is a buffer in memory
                 with finite size N entries
             a producer process inserts an entry into it
             a consumer process removes an entry from it
          Processes are concurrent
             so, we must use synchronization constructs to control access
              to shared variables describing buffer state

– 25 –
   Bounded Buffer using Semaphores
      var mutex: semaphore = 1      ; mutual exclusion to shared data
          empty: semaphore = n      ; count of empty buffers (all empty to start)
          full: semaphore = 0       ; count of full buffers (none full to start)

      producer:
         wait(empty)       ; one fewer buffer, block if none available
         wait(mutex)       ; get access to pointers
                <add item to buffer>
         signal(mutex)     ; done with pointers
         signal(full)      ; note one more full buffer

      consumer:
           wait(full)      ; wait until there’s a full buffer
           wait(mutex)     ; get access to pointers
                 <remove item from buffer>
           signal(mutex)   ; done with pointers
           signal(empty)   ; note there’s an empty buffer
           <use the item>
– 26 –
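A runnable sketch of this pattern, assuming Python's `threading.Semaphore` for wait/signal and a deque as the buffer:

```python
import threading
from collections import deque

N = 4                            # buffer capacity
buffer = deque()
mutex = threading.Semaphore(1)   # mutual exclusion on the buffer
empty = threading.Semaphore(N)   # count of empty slots
full  = threading.Semaphore(0)   # count of full slots

def producer(items):
    for item in items:
        empty.acquire()          # wait(empty): block if no free slot
        mutex.acquire()          # wait(mutex): get access to the buffer
        buffer.append(item)
        mutex.release()          # signal(mutex)
        full.release()           # signal(full): one more full slot

consumed = []
def consumer(n):
    for _ in range(n):
        full.acquire()           # wait(full): block until an item exists
        mutex.acquire()
        consumed.append(buffer.popleft())
        mutex.release()
        empty.release()          # signal(empty): one more free slot

p = threading.Thread(target=producer, args=(range(10),))
c = threading.Thread(target=consumer, args=(10,))
p.start(); c.start()
p.join(); c.join()
print(consumed)    # items arrive in FIFO order: [0, 1, ..., 9]
```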
   Example: Readers/Writers

          Basic problem:
            object is shared among several processes
            some read from it
            others write to it
          We can allow multiple readers at a time
            why?
          We can only allow one writer at a time
            why?

– 27 –
   Readers/Writers using Semaphores
         var mutex: semaphore          ; controls access to readcount
             wrt: semaphore ; control entry to a writer or first reader
             readcount: integer        ; number of readers

         write process:
             wait(wrt)       ; any writers or readers?
              <perform write operation>
             signal(wrt)     ; allow others

          read process:
              wait(mutex)      ; ensure exclusion
                     readcount = readcount + 1 ; one more reader
                     if readcount = 1 then wait(wrt) ; if we’re the first, synch with writers
              signal(mutex)    ; release readcount
                     <perform reading>
              wait(mutex)      ; ensure exclusion
                     readcount = readcount - 1 ; one fewer reader
                     if readcount = 0 then signal(wrt) ; no more readers, allow a writer
              signal(mutex)    ; release readcount

– 28 –
   Readers/Writers Notes

          Note:
            the first reader blocks if there is a writer
                 any other readers will then block on mutex
             if a writer exists, last reader to exit signals waiting writer
                 can new readers get in while writer is waiting?
             when writer exits, if there is both a reader and writer
              waiting, which one goes next is up to scheduler
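A minimal Python sketch of the readers/writers scheme above (binary semaphores for `mutex` and `wrt`; the shared object is a hypothetical counter):

```python
import threading

mutex = threading.Semaphore(1)   # protects readcount
wrt   = threading.Semaphore(1)   # writer / first-reader gate
readcount = 0
data = {"value": 0}              # the shared object
reads = []

def writer():
    wrt.acquire()                    # wait(wrt): any writers or readers?
    data["value"] += 1               # <perform write operation>
    wrt.release()                    # signal(wrt): allow others

def reader():
    global readcount
    mutex.acquire()
    readcount += 1
    if readcount == 1:
        wrt.acquire()                # first reader locks out writers
    mutex.release()
    reads.append(data["value"])      # <perform reading>
    mutex.acquire()
    readcount -= 1
    if readcount == 0:
        wrt.release()                # last reader lets writers back in
    mutex.release()

threads = [threading.Thread(target=writer) for _ in range(5)]
threads += [threading.Thread(target=reader) for _ in range(5)]
for t in threads: t.start()
for t in threads: t.join()
print(data["value"])   # always 5: the writes are serialized by wrt
```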

– 29 –
   Problems with Semaphores

          They can be used to solve any of the traditional
           synchronization problems, but:
             semaphores are essentially shared global variables
                can be accessed from anywhere (bad software engineering)
             there is no connection between the semaphore and the data
              being controlled by it
             used for both critical sections (mutual exclusion) and for
              coordination (scheduling)
             no control over their use, no guarantee of proper usage
           Thus, they are prone to bugs
             another (better?) approach: use programming language support

– 30 –

   Monitors

           A programming language construct that supports
            controlled access to shared data
              synchronization code added by compiler, enforced at runtime
              why does this help?
          Monitor is a software module that encapsulates:
            shared data structures
            procedures that operate on the shared data
             synchronization between concurrent processes that invoke those
              procedures
           Monitor protects the data from unstructured access
             guarantees data is accessed only through its procedures, hence in
              legitimate ways

– 31 –
   A Typical Monitor

                [Diagram: shared data in the center, surrounded by its
                 operations (procedures); a waiting queue of processes trying
                 to enter the monitor; at most one process in the monitor at a
                 time.]
– 32 –
   Monitor Facilities

          Mutual exclusion
            only one process can be executing inside at any time
                 thus, synchronization implicitly associated with monitor
             if a second process tries to enter a monitor procedure, it
              blocks until the first has left the monitor
                 more restrictive than semaphores!
                 but easier to use most of the time
          Once inside, a process may discover it can’t
           continue, and may wish to sleep
             or, allow some other waiting process to continue
             condition variables provided within monitor
                 processes can wait or signal others to continue
                 condition variable can only be accessed from inside monitor
– 33 –
   Condition Variables

          A place to wait; sometimes called a rendezvous
          Three operations on condition variables
            wait(c)
               release monitor lock, so somebody else can get in
               wait for somebody else to signal condition
               thus, condition variables have wait queues
            signal(c)
               wake up at most one waiting process/thread
               if no waiting processes, signal is lost
               this is different than semaphores: no history!
            broadcast(c)
               wake up all waiting processes/threads
– 34 –
   Bounded Buffer using Monitors
         Monitor bounded_buffer {
           buffer resources[N];
           condition not_full, not_empty;

           procedure add_entry(resource x) {
             while (array “resources” is full)
               wait(not_full);
             add “x” to array “resources”;
             signal(not_empty);
           }

           procedure get_entry(resource *x) {
             while (array “resources” is empty)
               wait(not_empty);
             *x = get resource from array “resources”;
             signal(not_full);
           }
         }
– 35 –
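The same monitor can be sketched with Python's `threading.Condition`, which has Mesa semantics, hence the `while` re-check:

```python
import threading

class BoundedBuffer:
    """Monitor-style bounded buffer: one lock, two condition variables,
    Mesa semantics (woken threads re-check their condition in a while loop)."""
    def __init__(self, n):
        self.items = []
        self.n = n
        self.lock = threading.Lock()
        self.not_full = threading.Condition(self.lock)
        self.not_empty = threading.Condition(self.lock)

    def add_entry(self, x):
        with self.lock:                      # at most one thread inside
            while len(self.items) == self.n: # while, not if: Mesa monitor
                self.not_full.wait()
            self.items.append(x)
            self.not_empty.notify()          # signal(not_empty)

    def get_entry(self):
        with self.lock:
            while not self.items:
                self.not_empty.wait()
            x = self.items.pop(0)
            self.not_full.notify()           # signal(not_full)
            return x

buf = BoundedBuffer(2)
out = []
p = threading.Thread(target=lambda: [buf.add_entry(i) for i in range(6)])
c = threading.Thread(target=lambda: [out.append(buf.get_entry()) for _ in range(6)])
p.start(); c.start(); p.join(); c.join()
print(out)   # [0, 1, 2, 3, 4, 5]
```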
   Two Kinds of Monitors

          Hoare monitors: signal(c) means
            run waiter immediately
            signaler blocks immediately
                 condition guaranteed to hold when waiter runs
                 but, signaler must restore monitor invariants before signaling!
          Mesa monitors: signal(c) means
            waiter is made ready, but the signaler continues
                 waiter runs when signaler leaves monitor (or waits)
                 condition is not necessarily true when waiter runs again
             signaler need not restore invariant until it leaves the monitor
             being woken up is only a hint that something has changed
                 must recheck conditional case
– 36 –

   Hoare vs. Mesa Monitors

           Hoare monitors
            if (not Ready)
                  wait(c)
          Mesa monitors
            while(not Ready)
                  wait(c)

          Mesa monitors easier to use
            more efficient
            fewer switches
            directly supports broadcast
           Hoare monitors leave less to chance
             when you wake up, the condition is guaranteed to be what you expect
– 37 –
   Lecture 3, September 23, 2003

          Inter-process communication

          Concurrency control and deadlock

– 38 –

   Deadlock

           Deadlock is a problem that can exist when a group
            of processes compete for access to fixed resources.

          Causes:
            Bad algorithm/program design
            Or unavoidable

– 39 –
Deadlock Definition
  Def: deadlock exists among a set of processes if every
   process is waiting for an event that can be caused only by
   another process in the set.
  Example: two processes share 2 resources that they must
   request (before using) and release (after using). Request
   either gives access or causes the proc. to block until the
   resource is available.

          Process 1:                   Process 2:
            request tape                 request printer
            request printer              request tape
            … <use them>                 … <use them>
            release printer              release tape
            release tape                 release printer
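A sketch of breaking the circular wait by ordering resource acquisition (Python locks standing in for the tape drive and printer; unlike the slide's Process 2, both processes here request tape first):

```python
import threading

tape = threading.Lock()
printer = threading.Lock()
done = []

# Circular wait is broken by imposing an order: every process requests
# the tape before the printer, so no one can hold the printer while
# waiting for the tape.
def process(name):
    with tape:                # request tape
        with printer:         # request printer
            done.append(name) # <use them>
    # locks released in reverse order on exit

t1 = threading.Thread(target=process, args=("p1",))
t2 = threading.Thread(target=process, args=("p2",))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(done))   # ['p1', 'p2'] -- both finish, no deadlock
```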
 – 40 –
Four Conditions for Deadlock

             Deadlock can exist if and only if 4 conditions hold

              1.   mutual exclusion: at least one resource must be held in a non-
                   sharable mode.
              2.   hold and wait: there must be a process holding one resource and
                   waiting for another.
              3.   No preemption: resources cannot be preempted.
               4.   circular wait: there must exist a set of processes
                    [p1, p2, …, pn] such that p1 is waiting for p2, p2 for p3,
                    and so on, with pn waiting for p1.

 – 41 –
Resource Allocation Graph

            Deadlock can be described through a resource allocation
             graph (RAG).
              The RAG consists of a set of vertices P={P1,P2,…,Pn} of
               processes and R={R1,R2,…,Rm} of resources.
              A directed edge from a process to a resource, Pi->Rj,
               implies that Pi has requested Rj.
              A directed edge from a resource to a process, Rj->Pi,
               implies that Rj has been allocated to Pi.
             If the graph has no cycles, deadlock cannot exist. If the
              graph has a cycle, deadlock may exist.

 – 42 –
Resource Allocation Graph Example

           [Diagram: resource allocation graph with processes P1, P2, P3
            and resources R1, R2, R3; the request and assignment edges
            form the cycles listed below.]
          There are two cycles here: P1-R1-P2-R3-P3-R2-P1
          and P2-R3-P3-R2-P2, and there is deadlock.
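A cycle check over this RAG can be sketched with a depth-first search (the edge set below is reconstructed from the cycles named above):

```python
# Directed RAG edges: request edges P->R and assignment edges R->P.
graph = {
    "P1": ["R1"], "R1": ["P2"], "P2": ["R3"], "R3": ["P3"],
    "P3": ["R2"], "R2": ["P1", "P2"],
}

def has_cycle(g):
    """DFS with coloring: a back edge to a 'visiting' node means a cycle."""
    state = {}                      # node -> "visiting" | "done"
    def dfs(u):
        state[u] = "visiting"
        for v in g.get(u, []):
            if state.get(v) == "visiting":
                return True         # back edge: cycle found
            if v not in state and dfs(v):
                return True
        state[u] = "done"
        return False
    return any(dfs(u) for u in g if u not in state)

print(has_cycle(graph))   # True -- matching the two cycles in the slide
```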

 – 43 –
Possible Approaches

           Deadlock Prevention: ensure that at least 1 of
            the necessary conditions cannot exist.

              Mutual exclusion: make resources shareable (isn’t
               really possible for some resources)
              hold and wait: guarantee that a process cannot hold a
               resource when it requests another, or, make processes
               request all needed resources at once, or, make it release
               all resources before requesting a new set
              circular wait: impose an ordering (numbering) on the
               resources and request them in order. Is it good?

 – 44 –
More Possible Approaches

 Deadlock Avoidance
    General idea: provide information in advance about what
     resources will be needed by processes to guarantee that
     deadlock will not exist.
 E.g., define a sequence of processes <P1,P2,..Pn> as safe if for
  each Pi, the resources that Pi can still request can be satisfied
  by the currently available resources plus the resources held
  by all Pj, j < i.
    This avoids circularities.
    When a process requests a resource, the system grants or
     forces it to wait, depending on whether this would be an
     unsafe state.

 – 45 –
 Processes p0, p1, and p2 compete for 12 tape drives
                   max need          current usage     could ask for
          p0             10                     5              5
          p1             4                      2              2
          p2             9                      2              7
                                     3 drives remain
 Current state is safe because a safe sequence exists:
          p1 can complete with current resources
          p0 can complete with current+p1
          p2 can complete with current +p1+p0

 If p2 requests 1 drive, then it must wait because that state
  would be unsafe. Why?
 – 46 –
The Banker’s Algorithm
 Banker’s algorithm decides whether to grant a resource
   request. Define data structures.
   int    n;                  //   # of processes
   int    m;                  //   # of resources
   int    available[m];       //   # of avail resources of type i
   int    max[n][m];          //   max demand of each Pi for each Rj
   int    allocation[n][m];   //   current allocation of resource Rj
                              //   to Pi
   int need[n][m];            //   max # of resources that Pi may
                              //   still request of Rj

   Let request[i] be a vector of the # of instances of resource Rj
   that Process Pi wants.

 – 47 –
    The Basic Banker’s Algorithm

          if (request[i] > available[i]) {
              // wait until resources become available
          }

          //   resources are available to satisfy the request; assume
          //   that we satisfy the request, we would then have:

          available = available - request[i];
          allocation[i] = allocation[i] + request[i];
          need[i] = need[i] - request[i];

         //   now check if this would leave us in a safe state
         //   if yes then grant the request otherwise the process
         //   must wait

– 48 –
Safety Check in Banker’s Algorithm
int work[m] = available;       // to accumulate resources
boolean finish[n] = {FALSE,…}; // none finished yet

do {
       find an i such that (finish[i]==FALSE) && (need[i] <= work)

       // process i can complete all of its requests

       finish[i] = TRUE;            // done with this process

       work = work + allocation[i]; // assume this process gave
                                    // all its allocation back
} until (no such i exists);

if (all finish entries are TRUE) {
    // system is safe, i.e., we found a sequence of processes
    // that will lead to everyone finishing
}
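The safety check can be sketched in Python for a single resource type (scalars instead of the slide's per-resource vectors), using the 12-tape-drive example from the earlier slide:

```python
def is_safe(available, allocation, max_need):
    """Safety check for one resource type: try to find an order in which
    every process can finish using what is free plus what finishers return."""
    work = available
    need = [m - a for m, a in zip(max_need, allocation)]
    finish = [False] * len(allocation)
    while True:
        # find an i such that finish[i] == False and need[i] <= work
        for i in range(len(allocation)):
            if not finish[i] and need[i] <= work:
                finish[i] = True          # Pi can run to completion...
                work += allocation[i]     # ...and returns its drives
                break
        else:
            break                         # no such i exists
    return all(finish)

# 12 tape drives; p0/p1/p2 hold 5/2/2 with max needs 10/4/9, 3 free:
print(is_safe(3, [5, 2, 2], [10, 4, 9]))   # True: safe via <p1, p0, p2>
# Granting p2 one more drive (2 free, p2 holds 3) would be unsafe:
print(is_safe(2, [5, 2, 3], [10, 4, 9]))   # False
```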

 – 49 –
Deadlock Detection

           If there is neither deadlock prevention nor
            avoidance, then deadlock may occur.

           In this case, we must have:
              an algorithm that determines whether a deadlock has occurred
              an algorithm to recover from the deadlock

           This is doable, but it’s costly

 – 50 –
Deadlock Detection Algorithm
int work[m] = available;       // to accumulate resources
boolean finish[n] = {FALSE,…}; // none finished yet

for (i = 0; i < n; i++) {
    if (allocation[i] is zero) { finish[i] = TRUE; }
}

do {
       find an i such that (finish[i]==FALSE && request[i] <= work)

       // process i can finish with currently available resources

       finish[i] = TRUE;            // done with this process

       work = work + allocation[i]; // assume this process gave
                                    // all its allocation back
} until (no such i exists);

if (finish[i] == FALSE for some i) {
    // system is deadlocked, with each such Pi in a deadlock cycle
}
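The detection algorithm differs from the safety check only in using `request` instead of `need`; a single-resource-type Python sketch (the two-process scenario below is a hypothetical example):

```python
def find_deadlocked(available, allocation, request):
    """Deadlock detection for one resource type: like the safety check,
    but driven by each process's current request, not its maximum need.
    Returns the indices of deadlocked processes (empty list if none)."""
    work = available
    finish = [alloc == 0 for alloc in allocation]  # holds nothing => finished
    while True:
        for i in range(len(allocation)):
            if not finish[i] and request[i] <= work:
                finish[i] = True          # Pi can finish...
                work += allocation[i]     # ...and returns its resources
                break
        else:
            break                         # no such i exists
    return [i for i, f in enumerate(finish) if not f]

# P0 and P1 each hold 1 unit and request 1 more; nothing is available:
print(find_deadlocked(0, [1, 1], [1, 1]))   # [0, 1] -- both deadlocked
# Same state with one unit free: P0 can finish, then P1:
print(find_deadlocked(1, [1, 1], [1, 1]))   # []
```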
  – 51 –
Deadlock Detection

            Deadlock detection algorithm is expensive. How
             often to invoke it depends on:
             how often or likely is deadlock
             how many processes are likely to be affected when deadlock occurs

 – 52 –
Deadlock Recovery
         Once a deadlock is detected, there are 2 choices:
      1.    abort all deadlocked processes (which will cost in the repeated
            computations necessary)
      2.    abort 1 process at a time until cycle is eliminated (which requires re-
            running the detection algorithm after each abort)
         Or, could do process preemption: release
          resources until system can continue. Issues:
           selecting the victim (could be clever based on R’s allocated)
           rollback (must rollback the victim to a previous state)
           starvation (must not always pick same victim)
          These are common database-inspired methods;
           within an interactive OS none are really that practical
 – 53 –
Real Life Deadlock Prevention
    Fewer resources (locks) means less deadlock potential, but also less
     potential concurrency. So there is a tradeoff here

    For really simple applications acquiring all the resources up front is
     fairly common, but not always practical.

     Programmers most often use common sense in the ordering of
      resource acquisitions and releases
        Resource levels are one technique that helps development

     In complicated software systems resource levels are not always
      practical (e.g., memory management and the file system often
      recursively call each other), and deadlock prevention is far more a
      matter of fine-tuning the locks and understanding the exact
      scenarios in which locks are acquired

 – 54 –
    Important Points to Remember

 When a deadlock does happen, by definition, it will not go
   away; therefore debugging deadlocks is somewhat simpler
   because all the processes are stuck and can’t squirm out of
   the way.

 Identifying a deadlock is sometimes easier than
   understanding how to prevent the deadlock

 No magic bullet here, but a lot of common sense

 – 55 –
   Distributed Deadlock

          What is distributed deadlock?

             Computer A                      Computer B
           Read item “a” locally                 Read item “b” locally

    Write item “b” on the other machine   Write item “a” on the other machine

– 56 –

          Chapter 2, section 2.3

           Chapter 3, Sections 3.2, 3.3, 3.4, 3.5, 3.6

– 57 –
   Next Lecture

          Memory management (Reading: Chapter 4)

          Programming Concepts

          Project Assignment #1

– 58 –
