					A 10-page Minimal Introduction to MPI
By Xizhou Feng and Sanjay Kumar
2002-02-13

1. What is MPI (Message Passing Interface)?
 A message-passing library specification
-- message-passing model
-- not a compiler specification
-- not a specific product

 For parallel computers, clusters, and heterogeneous networks

2. Features of MPI
 General
-- Communicators combine context and group for message security
-- Thread safety

 Point-to-point communication
-- Structured buffers and derived datatypes, heterogeneity
-- Modes: normal (blocking and non-blocking), synchronous, ready (to allow access to
fast protocols), buffered

 Collective
-- Both built-in and user-defined collective operations
-- Large number of data movement routines
-- Subgroups defined directly or by topology

 Application-oriented process topologies
-- Built-in support for grids and graphs (uses groups)

 Profiling
-- Hooks allow users to intercept MPI calls to install their own tools

 Environmental
-- inquiry
-- error control

 Non-message-passing concepts not included:
-- process management
-- remote memory transfers
-- active messages
-- threads
-- virtual shared memory

3. Computation model of MPI
    SPMD (Single Program Multiple Data) mode
    Every process gets a rank number
    The program branches on the rank number, so each process executes its own part of the work
      Everything stems from the basic send and receive operations

4. How large is MPI?
 MPI is large (125 functions)
 MPI is small (6 functions)

       MPI_Init
       MPI_Finalize
       MPI_Comm_size
       MPI_Comm_rank
       MPI_Send
       MPI_Recv

 MPI is just right


5. Where to use MPI?

      You need a portable parallel program
      You are writing a parallel library
      You have irregular or dynamic data relationships that do not fit a data parallel
       model

6. The simplest MPI program
#include "mpi.h"            //provides basic MPI definitions and types
#include <stdio.h>

int main( argc, argv )
int argc;
char **argv;
{
MPI_Init( &argc, &argv );    // starts MPI
MPI_Comm_rank( MPI_COMM_WORLD, &rank ); //Who am I
MPI_Comm_size( MPI_COMM_WORLD, &size ); //How many processes are there?
printf( "Hello world! I'm %d of %d\n", rank, size );
MPI_Finalize();              // exist MPI
return 0;
}

7. Compile and Run
Compile: mpicc -o hello hello.c        (mpicc adds the MPI include paths and libraries)
Run:     mpirun -np 2 -machinefile themachinefile hello
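
With two processes, the output should look like the following (the two lines may appear in either order, since the processes print independently):

Hello world! I'm 0 of 2
Hello world! I'm 1 of 2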

8. Point-to-point communication: sending and receiving messages

int MPI_Send
(
     void* message,           /*data buffer for data to be sent*/
     int count,               /*number of items to send */
     MPI_Datatype datatype, /*type of each item */
     int dest,              /*destination rank of process*/
     int tag,               /*message type*/
     MPI_Comm comm          /*communicator*/
);

int MPI_Recv
(
     void* message,         /*data buffer for data to be rec'd*/
     int count,             /*number of items to receive */
     MPI_Datatype datatype, /* type of each item */
     int source,            /* source process rank of data */
     int tag,               /* message type*/
     MPI_Comm comm,         /* communicator */
     MPI_Status* status     /* status structure for debug/check */
);

Communicator      -- a collection of processes that communicate with each other.
Tag               -- message type, an integer added to the message
Data Type         -- MPI predefined datatype for portability
        MPI_CHAR
        MPI_SHORT
        MPI_INT
        MPI_LONG
        MPI_UNSIGNED_CHAR
        MPI_UNSIGNED_SHORT
        MPI_UNSIGNED
        MPI_UNSIGNED_LONG
        MPI_FLOAT
        MPI_DOUBLE
        MPI_BYTE
        MPI_PACKED
Source            -- can be a wildcard, using MPI_ANY_SOURCE
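
As a sketch of how these two calls fit together (an example added here, not part of the original notes), the program below has process 0 send one integer to process 1 over MPI_COMM_WORLD; run it with at least two processes:

#include "mpi.h"
#include <stdio.h>

int main( int argc, char **argv )
{
    int rank, value;
    MPI_Status status;

    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );

    if ( rank == 0 ) {
        value = 42;                                   /* arbitrary payload */
        MPI_Send( &value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD );
    } else if ( rank == 1 ) {
        MPI_Recv( &value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status );
        printf( "Process 1 received %d from process 0\n", value );
    }

    MPI_Finalize();
    return 0;
}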

9. Communicators
All MPI communication is relative to a communicator which contains a context and a
group. The group is just a set of processes. We can define our own communicators.
/* split oldcomm into one communicator per row: processes passing the same
   value of row end up together in newcomm, ordered by rank */
MPI_Comm_rank( oldcomm, &rank );
MPI_Comm_split( oldcomm, row, rank, &newcomm );

/* split oldcomm again, this time into one communicator per column */
MPI_Comm_rank( oldcomm, &rank );
MPI_Comm_split( oldcomm, column, rank, &newcomm2 );

Another way to create a communicator with specific members is to use
MPI_Comm_create, which builds a communicator from a group.

Groups themselves can be created in many ways, for example:
MPI_Group_incl includes specific members
MPI_Group_excl excludes specific members
MPI_Group_range_incl and MPI_Group_range_excl use ranges of members
MPI_Group_union and MPI_Group_intersection create a new group from two
existing groups.
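
As an illustrative sketch (the names world_group, even_group, and even_comm are my own), the program below uses MPI_Comm_group, MPI_Group_incl, and MPI_Comm_create to build a communicator containing only the even-ranked processes:

#include "mpi.h"
#include <stdlib.h>

int main( int argc, char **argv )
{
    int        rank, size, i, nevens;
    int       *even_ranks;
    MPI_Group  world_group, even_group;
    MPI_Comm   even_comm;

    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );
    MPI_Comm_size( MPI_COMM_WORLD, &size );

    /* list the even ranks of MPI_COMM_WORLD */
    nevens = (size + 1) / 2;
    even_ranks = (int *) malloc( nevens * sizeof(int) );
    for ( i = 0; i < nevens; i++ )
        even_ranks[i] = 2 * i;

    /* build a group holding only those ranks, then a communicator from it */
    MPI_Comm_group( MPI_COMM_WORLD, &world_group );
    MPI_Group_incl( world_group, nevens, even_ranks, &even_group );
    MPI_Comm_create( MPI_COMM_WORLD, even_group, &even_comm );

    /* odd-ranked processes get MPI_COMM_NULL and simply skip the new communicator */
    if ( even_comm != MPI_COMM_NULL )
        MPI_Comm_free( &even_comm );

    MPI_Group_free( &even_group );
    MPI_Group_free( &world_group );
    free( even_ranks );
    MPI_Finalize();
    return 0;
}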

10. Collective communication: all processes in a communicator are involved in the
    communication
 Available collective communication patterns
Broadcast
int MPI_Bcast
(
     void* message,          /* data buffer for data to be sent */
     int count,              /* number of items to send         */
     MPI_Datatype datatype, /* type of each item             */
     int root,               /* root from whom to broadcast        */
     MPI_Comm comm           /* communicator                 */
);
MPI_Bcast sends a copy of the data in message on the process with rank root to each
process in the communicator comm.

Reduce-binary tree
int MPI_Reduce
(
     void* operand,             /* data buffer from which to reduce */
     void* result,              /* data buffer into which to reduce */
     int count,                 /* number of items to receive      */
     MPI_Datatype datatype, /* type of the items               */
     MPI_Op operator,           /* reduction operation          */
     int root,                  /* rank onto which to reduce       */
     MPI_Comm comm              /* communicator */
);
MPI_Reduce combines the operands stored in the memory referenced by operand using
operator and stores the result in *result on process root.


Reduction onto all processes-butterfly
int MPI_Allreduce
(
     void* operand,            /* data buffer from which to reduce */
     void* result,             /* data buffer into which to reduce */
     int count,                /* number of items to receive       */
     MPI_Datatype datatype, /* type of the items              */
     MPI_Op operator,          /* reduction operation           */
     MPI_Comm comm             /* communicator                 */
);
MPI_Allreduce combines the operands stored in the memory referenced by operand
using operator and stores the result in *result on all the processes.

Some built-in collective operations
    MPI_MAX /*maximum */
    MPI_MIN /* minimum */
    MPI_SUM /* sum */
    MPI_PROD /* product*/
    MPI_LAND /* logical and*/
We can also define our own reduction operations with MPI_Op_create
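
To make the reduction pattern concrete, here is a small added sketch in which every process contributes its rank and process 0 prints the global sum:

#include "mpi.h"
#include <stdio.h>

int main( int argc, char **argv )
{
    int rank, size, sum;

    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );
    MPI_Comm_size( MPI_COMM_WORLD, &size );

    /* every process contributes its rank; the sum arrives only on root 0 */
    MPI_Reduce( &rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD );

    if ( rank == 0 )
        printf( "Sum of ranks 0..%d is %d\n", size - 1, sum );

    /* with MPI_Allreduce the same sum would be available on every process */
    MPI_Finalize();
    return 0;
}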

11. Topology
MPI provides routines that impose structure, such as a grid or graph, on a collection of processes.
Example: (figure of an example process topology omitted)

Some MPI functions for topology
MPI_Cart_create  -- define a Cartesian topology
MPI_Cart_shift   -- find neighbors
MPI_Cart_coords  -- map rank to coordinates
MPI_Cart_rank    -- map coordinates to rank
MPI_Cart_sub     -- partition a Cartesian topology
MPI_Graph_create -- create a general graph topology
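
The sketch below (an added illustration; the grid shape and variable names are arbitrary) builds a periodic 2-D grid of processes and looks up each process's coordinates and its neighbors along one dimension:

#include "mpi.h"
#include <stdio.h>

int main( int argc, char **argv )
{
    int      rank, size;
    int      dims[2] = { 0, 0 };        /* let MPI choose the grid shape */
    int      periods[2] = { 1, 1 };     /* wrap around in both dimensions */
    int      coords[2], left, right;
    MPI_Comm grid;

    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );
    MPI_Comm_size( MPI_COMM_WORLD, &size );

    /* pick a balanced 2-D decomposition of size processes */
    MPI_Dims_create( size, 2, dims );

    /* create the Cartesian communicator (reorder = 1 lets MPI renumber ranks) */
    MPI_Cart_create( MPI_COMM_WORLD, 2, dims, periods, 1, &grid );

    /* where am I in the grid, and who are my neighbors along dimension 1? */
    MPI_Comm_rank( grid, &rank );
    MPI_Cart_coords( grid, rank, 2, coords );
    MPI_Cart_shift( grid, 1, 1, &left, &right );

    printf( "Rank %d at (%d,%d): left=%d right=%d\n",
            rank, coords[0], coords[1], left, right );

    MPI_Comm_free( &grid );
    MPI_Finalize();
    return 0;
}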

12. Buffering, blocking, and communication modes

(figures comparing buffering approaches omitted: "Method 1" is not efficient; the better approach needs a test for completion)

Blocking communication:
  -- MPI_Send does not complete until buffer is empty (available for reuse).
  -- MPI_Recv does not complete until buffer is full (available for use).


Non-blocking operations return (immediately) ``request handles'' that can be waited on
and queried:

MPI_Isend(start, count, datatype, dest, tag, comm, request)
MPI_Irecv(start, count, datatype, source, tag, comm, request)
MPI_Wait(request, status)
One can also test without waiting: MPI_Test( request, flag, status)

It is often desirable to wait on multiple requests. An example is a master/slave program,
where the master waits for one or more slaves to send it a message.
MPI_Waitall(count, array_of_requests, array_of_statuses)
MPI_Waitany(count, array_of_requests, index, status)
MPI_Waitsome(incount, array_of_requests, outcount, array_of_indices,
array_of_statuses)
There are corresponding test versions (MPI_Testall, MPI_Testany, MPI_Testsome) for each of these.
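
As a concrete sketch (an added example, not from the original notes), the program below posts a non-blocking receive and a non-blocking send between ranks 0 and 1, then uses MPI_Waitall to complete both; the gap between posting and waiting is where computation could overlap communication:

#include "mpi.h"
#include <stdio.h>

int main( int argc, char **argv )
{
    int         rank, inbuf = -1, outbuf;
    MPI_Request reqs[2];
    MPI_Status  stats[2];

    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );
    outbuf = rank;

    if ( rank < 2 ) {                    /* ranks 0 and 1 exchange one int */
        int partner = 1 - rank;

        /* post the receive first, then the send; neither call blocks */
        MPI_Irecv( &inbuf,  1, MPI_INT, partner, 0, MPI_COMM_WORLD, &reqs[0] );
        MPI_Isend( &outbuf, 1, MPI_INT, partner, 0, MPI_COMM_WORLD, &reqs[1] );

        /* ...useful computation could overlap the communication here... */

        MPI_Waitall( 2, reqs, stats );   /* both buffers are now safe to reuse */
        printf( "Rank %d got %d from rank %d\n", rank, inbuf, partner );
    }

    MPI_Finalize();
    return 0;
}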

MPI provides multiple modes for sending messages:

       Synchronous mode ( MPI_Ssend): the send does not complete until a matching
        receive has begun. (Unsafe programs become incorrect and usually deadlock
        within an MPI_Ssend.)
       Buffered mode ( MPI_Bsend): the user supplies a buffer to the system for its use.
        (The user supplies enough memory to make an unsafe program safe.)
       Ready mode ( MPI_Rsend): user guarantees that matching receive has been
        posted.
        -- allows access to fast protocols
        -- undefined behavior if the matching receive is not posted

        Non-blocking versions: MPI_Issend, MPI_Irsend, MPI_Ibsend
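
A small sketch of buffered mode (an added example; the buffer sizing follows the usual MPI_Pack_size plus MPI_BSEND_OVERHEAD recipe): the sender attaches a user buffer, calls MPI_Bsend, and detaches the buffer before freeing it. Run with at least two processes.

#include "mpi.h"
#include <stdio.h>
#include <stdlib.h>

int main( int argc, char **argv )
{
    int   rank, value = 7, packsize, bufsize;
    char *buffer;

    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );

    if ( rank == 0 ) {
        /* size the buffer: packed size of the message plus MPI's bookkeeping */
        MPI_Pack_size( 1, MPI_INT, MPI_COMM_WORLD, &packsize );
        bufsize = packsize + MPI_BSEND_OVERHEAD;
        buffer  = (char *) malloc( bufsize );

        MPI_Buffer_attach( buffer, bufsize );
        MPI_Bsend( &value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD );  /* copies into buffer and returns */
        MPI_Buffer_detach( &buffer, &bufsize );                 /* blocks until buffered data is delivered */
        free( buffer );
    } else if ( rank == 1 ) {
        MPI_Status status;
        MPI_Recv( &value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status );
        printf( "Rank 1 received %d via buffered send\n", value );
    }

    MPI_Finalize();
    return 0;
}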

13. Datatype again
MPI datatypes have two main purposes

      Heterogeneity --- parallel programs between different processors
      Noncontiguous data --- structures, vectors with non-unit stride, etc.

Basic datatypes, corresponding to the underlying language types, are predefined.

The user can construct new datatypes at run time; these are called derived datatypes.

Elementary:
        Language-defined types (e.g., MPI_INT or MPI_DOUBLE_PRECISION )
Vector:
        Separated by constant ``stride''
Contiguous:
        Vector with stride of one
Hvector:
        Vector, with stride in bytes
Indexed:
        Array of indices (for scatter/gather)
Hindexed:
        Indexed, with indices in bytes
Struct:
        General mixed types (for C structs etc.)
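
As a short added sketch of a derived datatype (the 4x4 matrix and the choice of column are arbitrary), the program below builds a vector type that describes one column of a row-major matrix and sends that column in a single message; run with at least two processes:

#include "mpi.h"
#include <stdio.h>

#define N 4

int main( int argc, char **argv )
{
    double       a[N][N];
    double       col[N];
    int          rank, i, j;
    MPI_Datatype column_type;
    MPI_Status   status;

    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );

    /* one column of a row-major N x N matrix: N doubles, block length 1, stride N */
    MPI_Type_vector( N, 1, N, MPI_DOUBLE, &column_type );
    MPI_Type_commit( &column_type );

    if ( rank == 0 ) {
        for ( i = 0; i < N; i++ )
            for ( j = 0; j < N; j++ )
                a[i][j] = 10.0 * i + j;
        /* send column 2 of the matrix as a single message */
        MPI_Send( &a[0][2], 1, column_type, 1, 0, MPI_COMM_WORLD );
    } else if ( rank == 1 ) {
        /* receive the column into a contiguous array of N doubles */
        MPI_Recv( col, N, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &status );
        for ( i = 0; i < N; i++ )
            printf( "col[%d] = %g\n", i, col[i] );
    }

    MPI_Type_free( &column_type );
    MPI_Finalize();
    return 0;
}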


14. Profiling
 All routines have two entry points: MPI_... and PMPI_....
 This makes it easy to provide a single level of low-overhead routines to intercept MPI
calls without any source code modifications.
 Used to provide ``automatic'' generation of trace files.
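
A minimal sketch of how the profiling hooks are used (an added illustration): a user-supplied MPI_Send counts calls and forwards to the real implementation through PMPI_Send; linking such wrappers ahead of the MPI library intercepts the calls without modifying the application source.

#include "mpi.h"
#include <stdio.h>

static int send_count = 0;   /* number of MPI_Send calls seen so far */

/* Our wrapper replaces the library's MPI_Send; the real work is done by PMPI_Send. */
int MPI_Send( void *buf, int count, MPI_Datatype datatype,
              int dest, int tag, MPI_Comm comm )
{
    send_count++;
    return PMPI_Send( buf, count, datatype, dest, tag, comm );
}

/* Report the count when the application shuts MPI down. */
int MPI_Finalize( void )
{
    printf( "This process called MPI_Send %d times\n", send_count );
    return PMPI_Finalize();
}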




15. Two freely available MPI implementations
MPICH
LAM
Both support Unix and Windows NT/2000
16. More information

Books:
Using MPI by William Gropp, Ewing Lusk, and Anthony Skjellum
MPI: The Complete Reference
Parallel Programming with MPI

Website:
http://www.mcs.anl.gov/mpi
http://www.mpi-forum.org (MPI Forum)
http://www-unix.mcs.anl.gov/mpi/index.html


William Gropp, Tutorial on MPI: The Message-Passing Interface,
http://www-unix.mcs.anl.gov/mpi/tutorial/gropp/talk.html

David W. Walker, MPI: from Fundamentals to Applications
http://www.epm.ornl.gov/~walker/mpitutorial/

MPI Programming and Subroutine Reference, IBM,
http://usgibm.nersc.gov/docs/pe/html/d3a64mst02.html, or
http://www.ahpcc.unm.edu/UserSupport/Docs/ibm-mpi-ref.pdf

Writing Message-Passing Parallel Programs with MPI,
http://www.epcc.ed.ac.uk/epcc-tec/documents/mpi-course/mpi-course.book_1.html



Note: The content is adapted from Dr. Buell’s notes, William Gropp’s paper, and David
W. Walker’s tutorial.

				