Interprocess Communication, Synchronization and Deadlocks

                Concurrent Processes
An operating system consists of a collection of two types of processes:
1.  Operating system processes – execute system code.
2.  User processes – execute user code.

-    All these processes execute concurrently. Concurrency means that
     the executions of several programs overlap in time, either in
     parallel on multiple processors or interleaved on one.
-    A concurrent program specifies two or more sequential programs that
     may be executed concurrently as parallel processes.

e.g. A reservation system that processes transactions from many
     terminals has a natural specification as a concurrent program in which
     each terminal is controlled by its own sequential program.

Interprocess Communication and Synchronization
In order to cooperate, concurrently executing processes must communicate
and synchronize. Interprocess communication is based on the use of
shared variables (variables that can be referenced by more than one
process) or message queues.

Synchronization is often necessary when processes communicate.
Processes execute at unpredictable speeds, yet to communicate, one
process must perform some action (such as setting a variable or sending
a message) that the other detects. Therefore, interprocess
synchronization is a set of constraints or protocols used to preserve
system integrity and consistency when concurrent processes share
resources that are serially reusable.

Critical Section          – refers to the code of a process in which it
accesses a shared resource. Example –

There is a common variable A = 1000.

P0 and P1 execute their respective critical sections to modify the value
    of A (for instance, each adding some amount to it). The shared
    variable A represents the common resource between the two processes.

Racing Problem – Suppose two processes P0 and P1 access a
       common integer A as above, with initial value 1000.
       The execution of these processes is expected to change the value
       of A to 1100, which is possible in two sequences only:
1.     P0 follows P1
2.     P1 follows P0
      But if P0 and P1 are allowed to execute in an arbitrary manner,
      their individual steps can interleave in any order, for example:
Like: P0, P1, P0, P1
or P0, P1, P0, etc.
      This can produce a wrong end value. This is called the racing
      problem (race condition). It implies that the concurrent processes
      are racing with each other to access a shared resource in an
      arbitrary order.
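The lost update can be traced step by step. This is a minimal sketch under the assumption that each process adds 50 to A (so a correct run ends at 1100) by reading, computing, and writing back:

```python
# Assumed scenario: A = 1000 and each process adds 50, so a correct
# run in either order ends with A = 1100. Here the read-modify-write
# steps of the two processes interleave badly.
A = 1000

p0_local = A          # P0 reads A (1000)
p1_local = A          # P1 reads A (1000), before P0 has written back
A = p0_local + 50     # P0 writes 1050
A = p1_local + 50     # P1 writes 1050, overwriting P0's update

print(A)              # 1050, not the expected 1100
```

One of the two updates is lost because both processes read A before either wrote it back.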

Mutual Exclusion –
    To avoid this problem it is necessary that at any time only one
    process executes in its critical section, not both. This is known
    as mutual exclusion of the two processes, in which a single process
    temporarily excludes all others from using a shared resource in
    order to ensure the system's integrity.
Requirements to solve the critical section problem –
1.  Ensure mutual exclusion between processes accessing the protected
    shared resource.
2.  Ensure that a process halting or terminating outside its critical
    section does not affect other processes sharing the same resource.
3.  When more than one process wishes to enter the critical section,
    grant entrance to one of them within finite time.

First Algorithm
Process p1
       while true do
          while turn = p2 do {keep testing};
          critical section;
          turn := p2;
End p1
Process p2
       while true do
          while turn = p1 do {keep testing};
          critical section;
          turn := p1;
End p2
-    In this algorithm process p1 busily loops until the variable turn
     becomes p1, and only then enters its critical section.
-    When p1 completes its critical section it sets turn to p2.
-    This algorithm has certain problems:
1.   Each process needs to know the names of all other processes in
     order to give them the turn.
2.   If a process crashes outside its critical section, it can never
     give the turn to the other process. For example, if p1 crashes
     outside its critical section, it can never give the turn back to
     process p2.
3.   If a process crashes inside its critical section, the crashed
     process will not be able to announce that the shared resource is
     free, so all other processes will wait forever.
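The turn-taking protocol above can be sketched with two threads; the variable names and iteration count here are assumptions for illustration:

```python
import threading

# A sketch of the first algorithm (taking turns), assuming two threads
# that each enter their critical section N times.
turn = 0          # whose turn it is: thread 0 or thread 1
count = 0
N = 100

def process(me, other):
    global turn, count
    for _ in range(N):
        while turn != me:      # keep testing until it is our turn
            pass
        count += 1             # critical section
        turn = other           # give the turn to the other process

t0 = threading.Thread(target=process, args=(0, 1))
t1 = threading.Thread(target=process, args=(1, 0))
t0.start(); t1.start()
t0.join(); t1.join()
print(count)  # 200
```

Mutual exclusion holds, but note the cost the text describes: the two threads are forced to alternate strictly, and if one stops, the other spins forever.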

Second Algorithm
var flag1, flag2 : boolean;  { both initially false }
Process p1
      while true do
         while flag2 do {keep testing};
         flag1 := true;
         critical section;
         flag1 := false;
End p1
Process p2
      while true do
         while flag1 do {keep testing};
         flag2 := true;
         critical section;
         flag2 := false;
End p2
    This algorithm is designed to avoid overdependence on other
    processes by using two flags, one per process. Whenever a process
    wants to enter its critical section, it checks whether the resource
    is being used by the other process by testing that process's flag.
    If it finds the other flag false, it sets its own flag to true,
    enters the critical section, and sets its flag back to false on
    leaving.
Problem –
    Suppose both processes p1 and p2 are outside their critical sections
    and both flags are false. If both want to enter at the same time, p1
    will find p2's flag false and p2 will find p1's flag false, so both
    will set their flags to true and enter their critical sections
    together, violating mutual exclusion.

Third Algorithm –
var flag1, flag2 : boolean;  { both initially false }
Process p1
      while true do
         flag1 := true;
         while flag2 do {keep testing};
         critical section;
         flag1 := false;
End p1
Process p2
      while true do
         flag2 := true;
         while flag1 do {keep testing};
         critical section;
         flag2 := false;
End p2
    The third algorithm tries to remove the second algorithm's problem
    by preceding the test of the other process's flag with the setting
    of its own flag. For example, process p1 first sets its own flag and
    then tests p2's flag; if it finds p2's flag false, it may safely
    proceed to the critical section.
Problem –
    Suppose process p1 wishes to enter the critical section and sets its
    flag to true, and p2 also wishes to enter at the same time and sets
    its flag just before p1 tests it. Then each will find the other's
    flag true and both will keep waiting for each other indefinitely.

      These simple mutual exclusion algorithms do not solve the problems
      that arise in interprocess communication. To overcome them, a
      solution was given by the Dutch mathematicians Dekker and Dijkstra
      by introducing semaphores, which gained wide acceptance and were
      implemented in several commercial operating systems.
      A (binary) semaphore is a variable that takes non-negative values
      less than or equal to 1 and is accessed through two primitive
      operations – wait and signal.
-     While one process is modifying the semaphore, no other process can
      simultaneously modify the same semaphore value.
var b : semaphore;  { initially 1 }
Process p1
      while true do
         wait(b);
         critical section;
         signal(b);
End p1
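The same scheme can be sketched with Python's `threading.Semaphore`; the counter, iteration count and thread count below are assumptions for illustration:

```python
import threading

# A minimal sketch of mutual exclusion with a binary semaphore,
# assuming two threads that each increment a shared counter 10000 times.
b = threading.Semaphore(1)   # 1 = free, 0 = busy
counter = 0

def process():
    global counter
    for _ in range(10000):
        b.acquire()          # wait(b): block until the semaphore is free
        counter += 1         # critical section
        b.release()          # signal(b): mark the resource free again

threads = [threading.Thread(target=process) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 20000
```

Because every increment happens between wait and signal, no update is lost and the final value is exact.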

Table showing the behavior of a semaphore for three processes p1, p2
and p3 (b = 1 means free, b = 0 means busy):

Time   p1        p2       p3        b    Queue
T1     -         -        -         1    -
T2     wait(b)   wait(b)  wait(b)   0    p2, p3
T3     cs        wait     wait      0    p2, p3
T4     signal    wait     wait      1    p2, p3
T5     -         wait     cs        0    p2
T6     -         wait     cs        0    p2
T7     wait(b)   wait     signal    1    p1, p2
T8     wait      cs       -         0    p1

Properties of Semaphores –
-    When semaphores are used, modifications of code or restructuring of
     processes and modules do not generally necessitate changes in other
     processes or modules.
-    There can be a situation in which some processes are making
     progress toward completion but one or more other processes are
     locked out of the resource. This is called indefinite postponement,
     also known as livelock, and the affected processes are said to be
     starved. To prevent it, some service discipline is applied among
     the waiting processes, e.g. process priorities, round robin, or
     granting requests only for a finite time.
-    Generally a semaphore is used to provide runtime serialization,
     i.e. one process at a time in the critical section. The granularity
     of a semaphore means how many processes and resources are
     associated with one semaphore. The finest granularity is
     accomplished by dedicating a separate semaphore to guard each
     specific shared resource.

Hardware Support for Mutual Exclusion –

1. Pessimistic and Optimistic Concurrency Control – there are two kinds
      of concurrency control, pessimistic and optimistic. Pessimistic
      means assuming the worst case and defending against it:
Like – block everything that can interfere, then unblock only one
      process at a time (e.g. the one of highest priority).
      In the optimistic approach the assumption is that no or few
      processes will interfere with each other:
Like – read the value of the variable and, if the resource is free,
      enter the critical section.

2. Disable/Enable Interrupts – disabling interrupts is the most widely
      applicable way of implementing mutual exclusion for
      multiprogramming systems operating on a uniprocessor. A process
      wishing to enter its critical section proceeds in the following
      manner –
DI – disable interrupts to prevent any interference
critical section
EI – enable interrupts, releasing the critical section
Basically this method is the pessimistic approach to concurrency control.
3. Test and Set – test-and-set is designed to resolve conflicts among
      contending processes by making it possible for only one process to
      receive permission to enter its critical section. When several
      concurrent processes are competing, the TS instruction ensures
      that only one of them proceeds to use the resource.
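A spinlock built on test-and-set can be sketched as follows; since Python has no TS instruction, an internal lock stands in for the hardware's atomicity, and all names and counts are assumptions:

```python
import threading

# A sketch of a test-and-set spinlock. Real hardware provides a single
# atomic TS instruction; here its atomicity is simulated with a helper.
_internal = threading.Lock()   # stands in for hardware atomicity
flag = False                   # False = free, True = busy

def test_and_set():
    """Atomically read the old value of flag and set it to True."""
    global flag
    with _internal:
        old = flag
        flag = True
        return old

def lock():
    while test_and_set():      # spin until we observe flag == False
        pass

def unlock():
    global flag
    flag = False

count = 0
def worker():
    global count
    for _ in range(1000):
        lock()
        count += 1             # critical section
        unlock()

ts = [threading.Thread(target=worker) for _ in range(2)]
for t in ts: t.start()
for t in ts: t.join()
print(count)  # 2000
```

Only the process that reads the old value False wins entry; everyone else keeps spinning until the winner clears the flag.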
Producers and Consumers Problem –
      There is a set of processes, in which some produce data (called
      producers) and some consume data (called consumers). There is a
      need to design a synchronization protocol that allows producers
      and consumers to operate concurrently in such a way that items are
      consumed in the exact order in which they are produced.
Producer/Consumer with Unbounded Buffer –
      We assume that the buffer is of unbounded capacity. After system
      initialization the producer will be the first process to produce
      an item. The buffer can be implemented as an array or a linked list.
var produced : semaphore;  { initially 0 }
Process producer
          while true do
                     place in buffer;
                     signal(produced);
End producer
Process consumer
         while true do
                    wait(produced);
                    take from buffer;
End consumer
      Producing is the first activity, which places items in the buffer.
      On the consumer side, the consumer waits for signal(produced),
      after which it enters its critical section to consume items. This
      algorithm allows a high degree of concurrency, but it cannot
      guarantee system integrity if the number of consumers and
      producers increases. The major problem with this algorithm is that
      there is no protocol for the producer to enter the critical
      section. The situation can be improved by another algorithm, with
      one more semaphore to protect the buffer –

var produced, b : semaphore;  { produced initially 0, b initially 1 }
Process producer
        while true do
              wait(b);
              place in buffer;
              signal(b);
              signal(produced);
End producer
Process consumer
        while true do
              wait(produced);
              wait(b);
              take from buffer;
              signal(b);
End consumer
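The two-semaphore protocol above can be sketched directly with Python threads; the buffer type, item count and thread layout are assumptions:

```python
import threading, collections

# A sketch of the unbounded-buffer producer/consumer protocol, assuming
# one producer, one consumer, and a small fixed workload of 5 items.
buffer = collections.deque()
b = threading.Semaphore(1)         # protects the buffer (mutual exclusion)
produced = threading.Semaphore(0)  # counts items available to consume
ITEMS = 5
consumed = []

def producer():
    for i in range(ITEMS):
        b.acquire()                # wait(b)
        buffer.append(i)           # place in buffer
        b.release()                # signal(b)
        produced.release()         # signal(produced)

def consumer():
    for _ in range(ITEMS):
        produced.acquire()         # wait(produced)
        b.acquire()                # wait(b)
        item = buffer.popleft()    # take from buffer
        b.release()                # signal(b)
        consumed.append(item)

tp = threading.Thread(target=producer)
tc = threading.Thread(target=consumer)
tp.start(); tc.start()
tp.join(); tc.join()
print(consumed)  # [0, 1, 2, 3, 4]
```

The counting semaphore `produced` makes the consumer wait for items, while the binary semaphore `b` keeps buffer accesses mutually exclusive, so items come out in production order.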
Deadlocks –
    In a multiprogramming environment several processes may compete for
    a fixed number of resources. A process requests resources, and if
    the resources are not available it enters a wait state. It may
    happen that those resources are being held by some other process
    which is itself waiting. In that case neither request can be
    granted. This is called a deadlock.
    “Deadlock is a situation where a group of processes is
    permanently blocked as a result of each process having acquired
    a set of resources needed for its completion and having to wait
    for release of the remaining resources held by others thus
    making it impossible for any of the deadlocked processes to
    proceed. “
The necessary conditions for deadlock are as follows –
1.  Mutual exclusion
2.  Hold and wait
3.  No preemption
4.  Circular wait
1.   Mutual exclusion – the shared resource is used in a mutually
     exclusive manner, by only one process at a time.
2.   Hold and wait – each process continues to hold already allocated
     resources while waiting to acquire additional ones.
3.   No preemption – a resource allocated to a process can be released
     only by that process's voluntary action or on a time-sharing basis.
4.   Circular wait – each process holds one or more resources that are
     being requested by the next process in the chain.
Types of resources –
1.   Reusable resources – can be used by another process after release.
     A resource is allocated to one process, and once it is free again
     it can be allocated to some other process, e.g. printer, memory,
     hard disk, CPU. A deadlock can occur if a resource is requested by
     one process while it is held by another process which is itself
     waiting.
2.   Consumable resources – cannot be shared with another process. For
     example, a message sent by a sender will be consumed by its
     receiver, and the same message cannot be shared by others. Deadlock
     can occur through infinite waiting for a message, e.g. a sender
     waiting for the acknowledgement of a message it sent.
Four techniques for deadlock handling –
-    Deadlock prevention
-    Deadlock avoidance
-    Deadlock detection
-    Deadlock recovery
Deadlock Prevention –
a)   Hold and wait – can be eliminated by forcing a process to release
     all the resources it has acquired.
     Basically there are 2 possible strategies for it –
     1. The process requests all needed resources at once.
     The first case is generally possible with batch processes. But this
     approach creates a problem: all the resources acquired by the
     process will be unavailable to other requesting processes.
     Therefore, some processes may be idle due to the unavailability of
     resources, and some resources will also be idle because they cannot
     be utilized, all for the sake of deadlock prevention. This approach
     gives low resource utilization and a reduction in the level of
     multiprogramming.
     2. The process requests resources incrementally.
     An alternative approach to preventing deadlock is to acquire
     resources incrementally, as needed, and to release all resources
     held by a process whenever it requests a resource that is
     temporarily unavailable.
b)   No preemption – this deadlock condition can be denied by allowing
     preemption, i.e. by giving the system authority to revoke ownership
     of a resource occupied by a process when needed.
c)   Circular wait – one way to prevent this condition is a linear
     ordering of system resources. A request that can be satisfied is
     granted and other requests are stored in a queue; resources are
     allocated only in increasing order of the resource ordering.
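Linear ordering can be sketched with two locks that every thread acquires in the same global order; the lock names and tasks are assumptions:

```python
import threading

# A sketch of circular-wait prevention by linear ordering of resources:
# every thread always acquires the locks in the same global order, so no
# cycle of waiting threads can form.
lock_a = threading.Lock()   # resource with order number 1
lock_b = threading.Lock()   # resource with order number 2
log = []

def task(name):
    with lock_a:            # always take the lower-ordered resource first
        with lock_b:        # then the higher-ordered one
            log.append(name)

ts = [threading.Thread(target=task, args=(n,)) for n in ("p1", "p2")]
for t in ts: t.start()
for t in ts: t.join()
print(len(log))  # 2
```

If one thread took `lock_b` before `lock_a` instead, the two could each hold one lock and wait for the other, which is exactly the circular wait being prevented.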
Deadlock Avoidance –
     The basic approach is to grant only those resource requests that
     cannot create deadlock in any case. The resource allocator keeps
     track of the number of allocated and the number of available
     resources of each type. This can be done with a general resource
     graph or with two-dimensional matrices, with resources in the rows
     and processes in the columns.
       Allocated         Claims           Available
       p1     p2         p1     p2       R1     R2
R1     1      0          0      1        0      2
R2     0      0          1      1

     The most important part of deadlock avoidance is the safety test. A
     state is regarded as safe if all processes already granted
     resources would be able to complete in some order, even if each
     such process were to use all the resources it is entitled to. There
     are two tests, one on resource request and one on release. This
     scheme is also called the Banker's Algorithm.
     Resource Request –
1.   Check the request for the resource, whether it is authorized or not.
2.   If the resource is not available for allocation, keep the request
     in a suspend queue.
3.   When it is available, check the number requested, allocate, and
     update the allocated, claims and available data structures.
4.   For the safety test: if claims <= available for some process,
     assume it can complete, add its allocation back to available, and
     mark it.
5.   If all processes can be marked, the system state is safe.
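The safety test can be sketched as a small function over the allocated/claims/available tables; rows here are processes rather than resources, and the numbers are the assumed example from the tables above:

```python
# A sketch of the Banker's algorithm safety test: repeatedly find an
# unmarked process whose remaining claims fit in the available pool,
# mark it, and reclaim its allocation.
def is_safe(allocated, claims, available):
    """Return True if every process can finish in some order."""
    n = len(allocated)            # number of processes
    m = len(available)            # number of resource types
    work = list(available)
    marked = [False] * n
    progress = True
    while progress:
        progress = False
        for p in range(n):
            if not marked[p] and all(claims[p][r] <= work[r] for r in range(m)):
                for r in range(m):
                    work[r] += allocated[p][r]   # p finishes; release its resources
                marked[p] = True
                progress = True
    return all(marked)

# Tables from the text, transposed so rows are processes p1, p2 and
# columns are R1, R2.
allocated = [[1, 0], [0, 0]]
claims    = [[0, 1], [1, 1]]
available = [0, 2]
print(is_safe(allocated, claims, available))  # True
```

Here p1's claims (0, 1) fit in (0, 2), so p1 can finish and return its unit of R1; then p2's claims (1, 1) fit as well, so the state is safe.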

Release Request –
When a resource is released, update the available data structure.
       Allocated         Claims           Available
       p1     p2         p1     p2       R1     R2
R1     1      0          0      1        0      1
R2     0      1          1      0
Deadlock Detection and Recovery –
     Instead of restricting the system by preventing or avoiding
     deadlocks, some systems grant available resources to requesting
     processes freely and only occasionally check whether a deadlock has
     occurred. Draw a graph of processes and resources and check whether
     a cycle exists. A simple approach to checking for cycles is to
     reduce the graph by removing satisfiable processes together with
     their resources. If no edge is left at the end, there is no cycle
     and the system is in a safe state with no deadlock; but if a cycle
     remains, the system is in a deadlock condition.

       Allocated         Requested        Available
       R1     R2         R1     R2        R1     R2
P1     1      1          0      1         0      0
P2     0      1          1      0




Algorithm –
1.   From the allocated, available and request data structures, unmark
     all active processes.
2.   Find an unmarked process whose request <= available; if found, mark
     it and add its allocation to available; repeat until no such
     process remains.
3.   If all processes are marked, the system is not in a deadlock
     condition; any processes left unmarked are deadlocked.
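The detection steps above can be sketched as a function over the example tables; the numbers are the assumed values from the table above:

```python
# A sketch of the deadlock detection algorithm (graph reduction by
# marking): a process whose request fits in the available pool can
# finish and release what it holds; anything left unmarked is deadlocked.
def detect_deadlock(allocated, requested, available):
    """Return the indices of processes left unmarked (deadlocked)."""
    n, m = len(allocated), len(available)
    work = list(available)
    marked = [False] * n
    progress = True
    while progress:
        progress = False
        for p in range(n):
            if not marked[p] and all(requested[p][r] <= work[r] for r in range(m)):
                for r in range(m):
                    work[r] += allocated[p][r]   # p can finish; reclaim resources
                marked[p] = True
                progress = True
    return [p for p in range(n) if not marked[p]]

# Rows = processes P1, P2; columns = R1, R2 (from the table above).
allocated = [[1, 1], [0, 1]]
requested = [[0, 1], [1, 0]]
available = [0, 0]
print(detect_deadlock(allocated, requested, available))  # [0, 1]
```

With nothing available, neither P1's request (0, 1) nor P2's request (1, 0) can be granted, so both remain unmarked: the two processes are deadlocked, matching the cycle in the example.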

