					           Message Passing Concepts and Synchronization Applications
                      CS 350 – Fall 2002, 10/23/02, 9/17/03

Message Passing (Reference: Stallings, “Operating Systems”, 4th ed., section 5.6)

Below are some of the key ideas on this topic.

The fundamental requirements for concurrent (cooperative or interactive) processing
are synchronization and communication. We saw these elements in the use of
semaphores in conjunction with shared memory. A message passing capability
combines both of these elements into a single facility. In addition, message passing
has the further advantage that it lends itself to distributed processing over a
network of different physical processors, as well as to uniprocessor applications.
The former is more difficult with semaphores and other techniques because of the
problems of managing interrupts across multiple interconnected machines.

Basic message passing primitives:

send(destination, message)
receive(source, message)

In general, destination is where the message is to be sent, and source is where the
receiver wants to receive a message from.

“message” is the message sent (or a reference to it) when sending, and a place (or a
reference to it) to put the message when receiving.
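
In C-like terms the two primitives might be declared roughly as follows. Everything
here (the type names, the use of integer process identifiers, the 256-byte body) is
illustrative only and is not taken from any particular system; real interfaces include
UNIX System V msgsnd()/msgrcv(), POSIX mq_send()/mq_receive(), and
MPI_Send()/MPI_Recv().

#include <stddef.h>

/* Illustrative message container; real systems fix or bound the size. */
typedef struct {
    size_t len;               /* number of valid bytes in data[] */
    char   data[256];         /* message contents */
} message_t;

/* Hypothetical prototypes, with names chosen only for this sketch:
 *   send()    delivers *msg to the process (or mailbox) named by destination;
 *   receive() waits for a message from source and copies it into *msg.      */
int send(int destination, const message_t *msg);
int receive(int source, message_t *msg);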

The design characteristics of a message passing system are given by Stallings, table 5.4,
page 242 (repeated below):

   [Stallings table 5.4: design characteristics of message systems for interprocess
   communication and synchronization – synchronization, addressing, format, and
   queuing discipline]

Synchronization characteristics:

Because of the “causal” relationship between send and receive, message passing has
natural or implicit synchronization characteristics.

Send blocking: either the sending process is blocked until the message is received,
or it is not. (In some systems, such as UNIX, the sender may also block when the
message queue is at its maximum size.)

Receive blocking: there are two possibilities:
  - If a message has been sent and is available, the message is received and the
    receiver continues execution.

  - If there is no message waiting, then either (a) the process is blocked until a
    message arrives, or (b) the process continues to execute, abandoning the
    receive attempt.


Blocking/nonblocking send and receive can occur in any combination, for example:

-      Blocking send and blocking receive – very tight synchronization (rendezvous)

-      Nonblocking send, blocking receive – most common (see the sketch after this list).

-      Nonblocking send, nonblocking receive – loosest synchronization.
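
As a concrete illustration of the most common combination (a send that does not wait
for the receiver, paired with a blocking receive), here is a minimal sketch using POSIX
message queues. The queue name /demo_queue, its sizes, and the message contents are
made up for this example, and error handling is kept to a minimum; on Linux it compiles
with something like cc demo.c -lrt.

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Queue attributes: at most 8 pending messages of up to 64 bytes each. */
    struct mq_attr attr = { .mq_maxmsg = 8, .mq_msgsize = 64 };
    mqd_t q = mq_open("/demo_queue", O_CREAT | O_RDWR, 0600, &attr);
    if (q == (mqd_t)-1) { perror("mq_open"); return 1; }

    /* The send does not wait for a receiver; it would block only if the
     * queue were already full (the UNIX behavior noted above). */
    const char *msg = "hello";
    mq_send(q, msg, strlen(msg) + 1, 0);

    /* The receive blocks until a message is available.  Opening the queue
     * with O_NONBLOCK instead would make it return at once with EAGAIN. */
    char buf[64];
    if (mq_receive(q, buf, sizeof buf, NULL) >= 0)
        printf("received: %s\n", buf);

    mq_close(q);
    mq_unlink("/demo_queue");
    return 0;
}

Since the send and the receive happen in the same process here, the example only shows
the calling conventions; in practice the two calls would sit in different processes that
share the queue name.
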
A more in-depth discussion of the advantages, disadvantages, and other variations of
these schemes is given on page 243 of Stallings' book.

Addressing Characteristics

Send primitive: specify which process is to receive the message.
Receive primitive: indicate the source process from which a message is to be received.

Direct addressing:

The send primitive has a parameter identifying the intended destination process.
The receive primitive can take two forms:
-     Explicit direct addressing: the receiver specifically designates the sending
      process a message is expected from; the process must know in advance from
      which process a message is expected – used in cooperative processing.
-     Implicit direct addressing: the “source” parameter is a “return
      variable”/pointer indicating where the message came from; the sender fills this
      in with its identification. There is no need for the receiver to know in advance
      which process to expect a message from (see the sketch after this list).
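
As a rough sketch of the two styles of direct addressing, the following uses MPI (chosen
only because it is a widely available message passing library with process-to-process
addressing; the tag 0 and the integer payload are arbitrary). Explicit addressing would
name the expected sender in MPI_Recv; implicit addressing uses MPI_ANY_SOURCE and
reads the actual sender back out of the status "return variable". Run with two
processes, e.g. mpirun -np 2.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        int value = 42;
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);   /* to process 1 */
    } else if (rank == 1) {
        int value;
        MPI_Status status;
        /* Explicit direct addressing would pass 0 as the source argument.
         * Implicit direct addressing: accept any sender, and let the system
         * fill the sender's identity into status.MPI_SOURCE. */
        MPI_Recv(&value, 1, MPI_INT, MPI_ANY_SOURCE, 0, MPI_COMM_WORLD, &status);
        printf("got %d from process %d\n", value, status.MPI_SOURCE);
    }

    MPI_Finalize();
    return 0;
}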

Indirect addressing:

Messages are not sent directly to a receiver, but to a shared data structure or queue
sometimes called a mailbox. This decouples the sender and receiver for more
flexibility. See Stallings fig. 5.24 for various indirect addressing schemes (repeated
below):

   [Stallings fig. 5.24: indirect process communication through mailboxes]

The relationship between sender and receiver can be:

One-to-one: allows a private communication link to be set up between two
processes.

Many-to-one: the bottom part of fig. 5.24 – a client/server model. The requesting
clients are on the left and the server is on the right. The mailbox is associated with
the server's service and is known as a “port” (see the discussion on sockets and the
sketch below).

One-to-many: provides a broadcast capability.
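
As a rough sketch of the many-to-one case, a datagram socket bound to a well-known
UDP port can play the role of the server's mailbox: any number of clients address the
port rather than the server process itself. The port number 5000 and the loop bound
are made up for this illustration.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);            /* the "mailbox" */
    struct sockaddr_in addr = { 0 };
    addr.sin_family      = AF_INET;
    addr.sin_port        = htons(5000);                /* the well-known port */
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(s, (struct sockaddr *)&addr, sizeof addr);

    char buf[128];
    struct sockaddr_in client;
    socklen_t len = sizeof client;
    /* Many senders, one receiver: any client that knows the port can send. */
    for (int i = 0; i < 3; i++) {
        ssize_t n = recvfrom(s, buf, sizeof buf, 0,
                             (struct sockaddr *)&client, &len);
        if (n > 0)   /* the sender's address is filled in for us (implicit) */
            printf("request from %s:%d\n",
                   inet_ntoa(client.sin_addr), ntohs(client.sin_port));
    }
    close(s);
    return 0;
}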

The association of processes to mailboxes can be static or dynamic. In static
association (typically one-to-one), a mailbox is permanently assigned to a process
until the process is destroyed. In dynamic association (typically with many senders),
mailboxes are assigned and removed dynamically (via “connect”/“disconnect”
primitives).

Mailbox ownership: Generally ownership is assigned to the creating process. When
this process is destroyed, the mailbox is destroyed. Sometimes ownership belongs to
the OS, in which case an explicit command is required to destroy the mailbox.

Message format: A message typically has a header and a body. The header is made
up of the message type, destination ID, source ID, message length, and control
information. The body is the message content.
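
A minimal sketch of such a layout in C; the field names and the fixed 256-byte body
are illustrative, not taken from any particular operating system.

#include <stddef.h>

/* Typical header fields, following the list above. */
struct msg_header {
    int    msg_type;      /* message type */
    int    dest_id;       /* destination ID */
    int    src_id;        /* source ID */
    size_t length;        /* length of the body, in bytes */
    int    control;       /* control information (flags, sequence number, ...) */
};

struct message {
    struct msg_header header;
    char              body[256];   /* message contents */
};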

Queuing discipline: The queuing discipline is typically FIFO, but message priorities
may be used, based on message type or designated by the sender. A scheme may also
allow a receiver to inspect the queue and select which message to receive next.

Mutual Exclusion:
Mutual exclusion can be achieved with message passing. Here is how it is done:

      We assume the use of blocking receives and nonblocking sends. A set of
      concurrent processes share a mailbox, which we call mutex because of the
      obvious analogy to a semaphore mutex. This mailbox is initialized to
      contain a single message with null content. A process wishing to enter its
      critical section first attempts to receive this message. If the mailbox is empty,
      this process blocks. Once a process has acquired the message, it performs its
      critical section and then places the message back into the mailbox. Thus, the
      message serves as a token that is passed from process to process. Fig. 5.26
      from Stallings shows how message passing accomplishes mutual exclusion
      (below):

      [Stallings fig. 5.26: mutual exclusion using messages]

      Fig. 5.26 assumes that if more than one process performs the receive operation
      concurrently:
       - If there is a message, it is delivered to only one process and the others are
         blocked, or

       - If the message queue is empty, all processes are blocked. When a message
         is available, only one blocked process is activated and takes the message.
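
A runnable sketch of the same idea (this is not Stallings' figure itself): the mailbox is
a POSIX message queue holding a single token message, and the competing "processes"
are threads. The queue name /mutex_mbox, the four threads, and the iteration count
are all made up for this example; compile on Linux with something like
cc mutex.c -lrt -lpthread.

#include <fcntl.h>
#include <mqueue.h>
#include <pthread.h>
#include <stdio.h>

static mqd_t mutex_mbox;          /* the shared mailbox, called "mutex" in the text */
static long  shared_counter = 0;  /* touched only inside the critical section */

static void *worker(void *arg)
{
    (void)arg;
    char token;
    for (int i = 0; i < 100000; i++) {
        mq_receive(mutex_mbox, &token, 1, NULL);   /* take the token (blocks) */
        shared_counter++;                          /* critical section */
        mq_send(mutex_mbox, &token, 1, 0);         /* put the token back */
    }
    return NULL;
}

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 1, .mq_msgsize = 1 };
    mq_unlink("/mutex_mbox");
    mutex_mbox = mq_open("/mutex_mbox", O_CREAT | O_RDWR, 0600, &attr);

    char token = 0;                        /* the single null-content message */
    mq_send(mutex_mbox, &token, 1, 0);

    pthread_t t[4];
    for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);

    printf("counter = %ld (expected %d)\n", shared_counter, 4 * 100000);
    mq_close(mutex_mbox);
    mq_unlink("/mutex_mbox");
    return 0;
}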


The bounded-buffer (producer/consumer) problem using message passing –
pseudocode (below, from Stallings fig. 5.27):

   [Stallings fig. 5.27: a solution to the bounded-buffer producer/consumer problem
   using messages]

We use the mutual exclusion power of message passing. Compare this with the
semaphore solution in fig. 5.16. This solution capitalizes on the ability of message
passing to pass data in addition to providing mutual exclusion.

Two mailboxes are used. As the producer generates data, it is sent as a message to
the mailbox mayconsume. As long as there is at least one message in that mailbox,
the consumer can consume. Thus mayconsume serves as the buffer (it contains the
data); the data in the buffer are organized as a queue of messages. The buffer has a
fixed size given by capacity. Initially the mailbox mayproduce is filled with a
number of null messages equal to the capacity of the buffer. The number of
messages in mayproduce shrinks with each production and grows with each
consumption.
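
A runnable sketch along the same lines, again using POSIX message queues as the
mailboxes and with threads standing in for the producer and consumer processes. The
queue names, the capacity of 4, and the ten one-byte items are made up for this
illustration; compile on Linux with something like cc bb.c -lrt -lpthread.

#include <fcntl.h>
#include <mqueue.h>
#include <pthread.h>
#include <stdio.h>

#define CAPACITY 4                 /* buffer size */

static mqd_t mayproduce, mayconsume;

static void *producer(void *arg)
{
    (void)arg;
    char slot;                                     /* empty-slot token */
    for (char item = 'a'; item <= 'j'; item++) {
        mq_receive(mayproduce, &slot, 1, NULL);    /* wait for an empty slot */
        mq_send(mayconsume, &item, 1, 0);          /* the data travels in the message */
    }
    return NULL;
}

static void *consumer(void *arg)
{
    (void)arg;
    char item, slot = 0;
    for (int i = 0; i < 10; i++) {
        mq_receive(mayconsume, &item, 1, NULL);    /* wait for data */
        printf("consumed %c\n", item);
        mq_send(mayproduce, &slot, 1, 0);          /* free the slot */
    }
    return NULL;
}

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = CAPACITY, .mq_msgsize = 1 };
    mq_unlink("/mayproduce");  mq_unlink("/mayconsume");
    mayproduce = mq_open("/mayproduce", O_CREAT | O_RDWR, 0600, &attr);
    mayconsume = mq_open("/mayconsume", O_CREAT | O_RDWR, 0600, &attr);

    char slot = 0;                                 /* capacity null messages to start */
    for (int i = 0; i < CAPACITY; i++) mq_send(mayproduce, &slot, 1, 0);

    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);

    mq_close(mayproduce);   mq_close(mayconsume);
    mq_unlink("/mayproduce");  mq_unlink("/mayconsume");
    return 0;
}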


Note that the synchronization aspect is achieved in addition to keeping track of the
full and empty buffer slots. The two examples given here demonstrate that the
properties of both binary (mutex) and counting semaphores can be modeled using
message passing!



