How to install Globus Toolkit v.2.4 and MPICH-G2

					                   Parallel and Distributed Group, Politehnica University of Bucharest
                                     UPB Grid Days 12-30 July, 2004




      DOC.ID       UPB-GRID-I-T02/04
      Group        Parallel and Distributed Systems
      Type         Technical Memo
      Subject      Installing Globus Toolkit 2.4 and MPICH-G2
      Team         Asavei Victor s0rcer0r13@yahoo.com
                   Herisanu Alexandru aherisanu@yahoo.com
                   Ifrim Mircea mifrim@hpc.pub.ro
      To           C.Cirstoiu, A.Iosup, R.Voicu
      Date         30.07.2004




Purpose
       This memo presents in detail the process of installing Globus Toolkit 2.4 and MPICH-G2.
We also tried to install MPICH-G2 together with Globus Toolkit 3.2, but in the end the two programs
proved to be incompatible. The next generation of MPICH will be called MPICH-G4 and will work with
Globus Toolkit 4.






       Table of contents
Purpose
Table of contents
1. Introduction
2. Tools description
   2.1. GRID package
   2.2. Additional tools
      2.2.1. MPICH-G2
3. Installation process
   3.1. Testbed description
   3.2. Prerequisites
      3.2.1. GRID package
      3.2.2. Additional tools
      3.2.3. Additional files
   3.3. Installation procedure
      User setup
      Installing Globus Packaging Toolkit 3.0.1 (GPT)
      Installing Globus Toolkit 2.4
      Testing GRAM
      Testing MDS
      Setting up a GridFTP Server
      Installing MPICH-G2
      How to use MPICH-G2
   3.4. Example MPI Application
   3.5. Testing procedure
   3.6. Installation quick-list
4. Activity report
   4.1. Work completed
   4.2. Problems
   4.3. Remaining work
5. Conclusions
   Bibliography







     1. Introduction
Globus Toolkit 2.4 is a GRID package for wide-area distributed systems. We have been assigned to
deploy and test it. Globus Toolkit is widely used, and its main features include scalability, security
and ease of use.

This technical memo also presents in detail the process of installing MPICH-G2, which is a rewrite of
MPICH version 1 to work on top of the Globus middleware. In fact, MPICH-G2 programs use Globus
to discover the resources they need and to make use of the Grid environment.

This document is divided into two basic parts: first the installation, then the testing of the environment.

2. Tools description
2.1. GRID package

So-called computational Grids enable the coupling and coordinated use of geographically distributed
resources for such purposes as large-scale computation, distributed data analysis, and remote
visualization. The development and adaptation of applications for Grid environments is made
challenging, however, by the often heterogeneous nature of the resources involved and by the fact that
these resources typically reside in different administrative domains, run different software, are subject
to different access control policies, and may be connected by networks with widely varying
performance characteristics.
The GRID package used is Globus Toolkit v. 2.4 from www.globus.org. It is widely used and offers
a well-suited environment for developing grid applications. It is composed of three main parts: Resource
Management, Information Services and Data Management.


2.2. Additional tools
2.2.1. Mpich-G2
MPICH-G2 hides heterogeneity by using Globus Toolkit services for such purposes as authentication,
authorization, executable staging, process creation, process monitoring, process control,
communication, redirection of standard input and output, and remote file access.







    3. Installation process

3.1. Testbed description
The testbed consists of 5 computers, connected through a Fast Ethernet LAN. The computer
configuration is described in Table 3.1.1. The computer layout is depicted in Figure 3.1.1.

              ID  Processor                           RAM      HDD     OS
               1  Intel(R) Pentium(R) 4 CPU 1.50GHz   128 MB   20 GB   Debian Woody, kernel 2.4.25-1-686
               2  Intel(R) Pentium(R) 4 CPU 1.50GHz   128 MB   20 GB   Debian Woody, kernel 2.4.25-1-686
               3  Intel(R) Pentium(R) 4 CPU 1.50GHz   128 MB   20 GB   Debian Woody, kernel 2.4.25-1-686
               4  Intel(R) Pentium(R) 4 CPU 1.50GHz   128 MB   20 GB   Debian Woody, kernel 2.4.25-1-686
               5  Intel(R) Pentium(R) 4 CPU 1.50GHz   128 MB   20 GB   Debian Woody, kernel 2.4.25-1-686
                          Table 3.1.1. Testbed computers configuration.




                               Figure 3.1.1. Testbed computers layout.


3.2. Prerequisites
The main package is the Globus Toolkit 2.4.3 GRID distribution. The additional packages are its library
dependencies. All packages were downloaded from www.globus.org.





    3.2.1. GRID package
We used an older version of the Globus Toolkit because the latest stable release was incompatible with
MPICH-G2. A FAQ for installing GT3.2 will be added at the end of this memo.

            Package               Globus Toolkit
            Site                  www.globus.org
            Version               2.4.3
            OS                    Linux
            Form                  Source
            Size                  n/a


3.2.2. Additional tools

        ID Name, site                Other info (size)        Description
         1 Globus Packaging          -                        A prerequisite for GT.
           Toolkit
           (www.globus.org)
         2 Simple CA                 -                        For small-scale installations you can
           (www.globus.org)                                   use the Certificate Authority
                                                              provided by Globus.


3.2.3. Additional files
We recommend also downloading the following files:

    ID Name, site                                                                    Description
     1 MPICH-G2: A Grid-Enabled Implementation of the Message Passing                Whitepaper
       Interface
     2 Globus Toolkit 2.2 MDS Technology Brief                                       Whitepaper



3.3 Installation procedure
User setup
First, we must set up the users. We will use a user named globus for administrative purposes, and a
client account, named client.


              masina3:/opt/globus# groupadd -g 400 globus
              masina3:/opt/globus# useradd -c "Globus Admin" -d /home/globus -g globus -m -p globus -s /bin/bash -u 400 globus
              masina3:/opt/globus# useradd -c "Globus User" -d /home/client -g users -m -p globus -s /bin/bash -u 401 client
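
A quick sanity check of the two new accounts (a minimal sketch; the expected uid/gid values follow
directly from the useradd commands above):

              masina3:/opt/globus# id globus     # should report uid=400 and group globus
              masina3:/opt/globus# id client     # should report uid=401 and group users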





     Installing Globus Packaging Toolkit 3.0.1 (GPT)
Our install started at Wed Jul 21 22:49:32 EEST 2004 and finished around
Wed Jul 21 23:03:44 EEST 2004.

First download GPT from http://www.globus.org/gt2.4/download.html, unzip and untar.


               masina3:~/globus/install# wget http://www-unix.globus.org/ftppub/gt2/2.4/2.4-latest/gpt/gpt-3.0.1-src.tar.gz
               masina3:~/globus/install# tar -xzvf /root/globus/gpt-3.0.1-src.tar.gz



We usually use pristine sources; because a source install has no clean uninstall procedure, we will
search the system for added or modified files. This way we can later package our source-compiled Grid
environment using any tool we want.
To install the Globus Packaging Toolkit you must specify its install path by setting the GPT_LOCATION
environment variable. After installing GPT we will find out which files have changed and which files
have been added to the system, and summarize them in /root/GPT-Installed.


               masina3:~/globus/install# cd gpt-3.0.1/
               masina3:~/globus/install/gpt-3.0.1# find /* > /root/gpt1
               masina3:~/globus/install/gpt-3.0.1# export GPT_LOCATION=/opt/globus/gpt-3.0.1
               masina3:~/globus/install/gpt-3.0.1# ./build_gpt
               masina3:~/globus/install/gpt-3.0.1# find /* > /root/gpt2
               masina3:~/globus/install/gpt-3.0.1# diff /root/gpt1 /root/gpt2 > /root/GPT-Installed



We will use GPT_LOCATION later, so we will add the following lines to /etc/profile or to ~/.profile
if such a file exists (which of these files is read at login depends on your shell and its startup
configuration). The examples work for the sh shell.

GPT_LOCATION=/opt/globus/gpt-3.0.1
export GPT_LOCATION

Error Knowledge Base:
Problem: /usr/bin/ld: cannot find -ldb
Answer: apt-get install libdb2-dev
       http://mail.gnome.org/archives/gnome-list/2001-January/msg00312.html
       File: GPT_KB

Problem: /usr/bin/ld: cannot find -lgdbm
Answer: we downloaded and installed gdbm by hand (see above),
       or: apt-get install libgdbm-dev libdb-dev

Please watch out for any warnings about unknown or missing libraries; these are important messages.
Remember to run ldconfig after installing new libraries.

The apt-get tool is only available on Debian; please use the command provided by your OS to install
the necessary packages. In the Annex of this memo you will also find an example installation of such
libraries.
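
For example, on our Debian Woody testbed the development libraries from the knowledge base above
can be installed and registered like this (a sketch; the package names are Debian-specific, so adapt
them to your distribution):

               masina3:~# apt-get install libdb2-dev libgdbm-dev libdb-dev
               masina3:~# ldconfig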


Installing Globus Toolkit 2.4
For installing Globus Toolkit 2.4 (GT), you must download the following files from
http://www.globus.org/gt2.4/download.html.

               globus-data-management-client-2.4.3-src_bundle.tar.gz
               globus-data-management-server-2.4.3-src_bundle.tar.gz
               globus-data-management-sdk-2.4.3-src_bundle.tar.gz
               globus-information-services-client-2.4.3-src_bundle.tar.gz
               globus-information-services-server-2.4.3-src_bundle.tar.gz
               globus-information-services-sdk-2.4.3-src_bundle.tar.gz
               globus-resource-management-client-2.4.3-src_bundle.tar.gz
               globus-resource-management-server-2.4.3-src_bundle.tar.gz
               globus-resource-management-sdk-2.4.3-src_bundle.tar.gz


This can be done by using wget:

               wget http://www-unix.globus.org/ftppub/gt2/2.4/2.4-latest/bundles/src/globus-resource-management-sdk-2.4.3-src_bundle.tar.gz



or you can use this script.

               #!/bin/sh

               DOWNLOAD_LIST='
               globus-data-management-client-2.4.3-src_bundle.tar.gz
               globus-data-management-server-2.4.3-src_bundle.tar.gz
               globus-data-management-sdk-2.4.3-src_bundle.tar.gz
               globus-information-services-client-2.4.3-src_bundle.tar.gz
               globus-information-services-server-2.4.3-src_bundle.tar.gz
               globus-information-services-sdk-2.4.3-src_bundle.tar.gz
               globus-resource-management-client-2.4.3-src_bundle.tar.gz
               globus-resource-management-server-2.4.3-src_bundle.tar.gz
               globus-resource-management-sdk-2.4.3-src_bundle.tar.gz'

               for x in $DOWNLOAD_LIST; do
                 echo "Starting download of $x"
                 echo ________________________________________________________
                 wget http://www-unix.globus.org/ftppub/gt2/2.4/2.4-latest/bundles/src/$x
                 echo ________________________________________________________
               done
               echo Done.





       Globus Toolkit uses a PKI-based security system, so you must have the capability
of issuing and signing digital certificates. Globus Toolkit comes with a package
that allows Globus.org to sign your requests by mail. We will not use this; we will
use our own Certificate Authority instead.

Download globus_simple_ca_bundle-latest.tar.gz

               wget http://www-unix.globus.org/ftppub/gsi/simple_ca/globus_simple_ca_bundle-latest.tar.gz



Additional instructions for installing Simple CA can be found at:
http://www.globus.org/security/simple-ca.html
After downloading the necessary files, we must build and install them. First we must prepare our
environment:


               masina2:/opt/globus# export GLOBUS_LOCATION=/opt/globus/gt2.4
               masina2:/opt/globus# export GPT_LOCATION=/opt/globus/gpt-3.0.1



Like before we will also add these lines in /etc/profile and ~/.profile

GLOBUS_LOCATION=/opt/globus/gt2.4
export GLOBUS_LOCATION

Technically we must build and install each of the downloaded files using Globus Packaging Toolkit.
This would be done like this:


               /opt/globus/gpt-3.0.1/sbin/gpt-build /root/globus/kituri/globus-data-management-sdk-2.4.3-src_bundle.tar.gz gcc32dbg



Because time is money, we will use the following script. Feel free to change the variables
to your needs.


               #!/bin/bash

               GPT_LOCATION=/opt/globus/gpt-3.0.1
               GLOBUS_LOCATION=/opt/globus/gt2.4
               LOG_LOCATION=/root/globus/docs/logs
               SRC_LOCATION=/root/globus/kituri/gt2.4

               INSTALL_LIST='
               globus-data-management-client-2.4.3-src_bundle.tar.gz
               globus-data-management-server-2.4.3-src_bundle.tar.gz
               globus-data-management-sdk-2.4.3-src_bundle.tar.gz
               globus-information-services-client-2.4.3-src_bundle.tar.gz
               globus-information-services-server-2.4.3-src_bundle.tar.gz
               globus-information-services-sdk-2.4.3-src_bundle.tar.gz
               globus-resource-management-client-2.4.3-src_bundle.tar.gz
               globus-resource-management-server-2.4.3-src_bundle.tar.gz
               globus-resource-management-sdk-2.4.3-src_bundle.tar.gz'

               export GPT_LOCATION GLOBUS_LOCATION

               for x in $INSTALL_LIST; do
                 PERSONAL_LOG=$LOG_LOCATION/$x-Install
                 echo "Started installing $x at `date`" > $PERSONAL_LOG
                 echo ________________________________________________________
                 echo $GPT_LOCATION/sbin/gpt-build $SRC_LOCATION/$x gcc32dbg \
                  -logdir $LOG_LOCATION
                 $GPT_LOCATION/sbin/gpt-build $SRC_LOCATION/$x gcc32dbg \
                  -logdir $LOG_LOCATION
                 # append (>>) so the start timestamp is not overwritten
                 echo "Finished installing $x at `date`" >> $PERSONAL_LOG
                 echo ________________________________________________________
               done



This will create log files from the Globus Packaging Toolkit at LOG_LOCATION for later inspection.
Additionally, this script will create some *-Install files which contain timing information (install
start/stop) and the build status for each package.

After the build is complete, we must finish some parts by hand.


           masina2:/opt/globus/gt2.4#      export GLOBUS_LOCATION=/opt/globus/gt2.4
           masina2:/opt/globus/gt2.4#      export GPT_LOCATION=/opt/globus/gpt-3.0.1
           masina2:/opt/globus/gt2.4#      source /opt/globus/gt2.4/etc/globus-user-env.sh
           masina2:/opt/globus/gt2.4#      /opt/globus/gpt-3.0.1/sbin/gpt-postinstall

           running /opt/globus/gt2.4/setup/globus/setup-globus-common...
           creating globus-sh-tools-vars.sh
           creating globus-script-initializer
           creating Globus::Core::Paths
           checking globus-hostname
           Done
           running /opt/globus/gt2.4/setup/globus/setup-globus-gatekeeper...
           Creating gatekeeper configuration file...
           Done
           Creating grid services directory...
           Done
           running /opt/globus/gt2.4/setup/globus/setup-globus-mds-common...

           Creating .../opt/globus/gt2.4/etc/grid-info.conf
           Done
           running /opt/globus/gt2.4/setup/globus/setup-globus-mds-gris...

           Creating.../opt/globus/gt2.4/sbin/SXXgris
           /opt/globus/gt2.4/libexec/grid-info-script-initializer
           /opt/globus/gt2.4/libexec/grid-info-mds-core





/opt/globus/gt2.4/libexec/grid-info-common
/opt/globus/gt2.4/libexec/grid-info-cpu*
/opt/globus/gt2.4/libexec/grid-info-fs*
/opt/globus/gt2.4/libexec/grid-info-mem*
/opt/globus/gt2.4/libexec/grid-info-net*
/opt/globus/gt2.4/libexec/grid-info-platform*
/opt/globus/gt2.4/libexec/grid-info-os*
/opt/globus/gt2.4/etc/grid-info-resource-ldif.conf
/opt/globus/gt2.4/etc/grid-info-resource-register.conf
/opt/globus/gt2.4/etc/grid-info-resource.schema
/opt/globus/gt2.4/etc/grid.gridftpperf.schema
/opt/globus/gt2.4/etc/gridftp-resource.conf
/opt/globus/gt2.4/etc/gridftp-perf-info
/opt/globus/gt2.4/etc/grid-info-slapd.conf
/opt/globus/gt2.4/etc/grid-info-site-giis.conf
/opt/globus/gt2.4/etc/grid-info-site-policy.conf
/opt/globus/gt2.4/etc/grid-info-server-env.conf
/opt/globus/gt2.4/etc/grid-info-deployment-comments.conf
Done
running /opt/globus/gt2.4/setup/globus/setup-ssl-utils...
setup-ssl-utils: Configuring ssl-utils package
Running setup-ssl-utils-sh-scripts...

**********************************************************

Note: To complete setup of the GSI software you need to run the following
script as root to configure your security configuration directory:

/opt/globus/gt2.4/setup/globus/setup-gsi

For further information on using the setup-gsi script, use the -help option. The
-nonroot option can be used on systems where root access is not available.

****************************************************************
***********

setup-ssl-utils:
Complete

running
/opt/globus/gt2.4/setup/globus/setup-globus-gram-job-manager...
Creating state file directory.
Done.
Reading gatekeeper configuration file...
Determining system information...
Creating job manager configuration file...
Done running
/opt/globus/gt2.4/setup/globus/setup-globus-job-manager-fork...
configure: warning: Cannot locate mpirun
loading cache ./config.cache
checking for mpirun... no
updating cache ./config.cache
creating ./config.status
creating fork.pm

masina2:/opt/globus/gt2.4#






Now we must install our Certificate Authority (CA). We will use the Simple CA package.
The CA will be installed on one machine only. After the install is complete, you will
distribute a package created by your CA to the other machines so they can submit their
certificate signing requests.
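
For reference, the globus_simple_ca_3cf5d926_setup package built below is the distribution package
generated on the CA machine. A hedged sketch of how it is produced, based on the Simple CA
instructions referenced above (the 3cf5d926 hash is specific to our CA and will differ on your site):

              # on the CA machine only
              /opt/globus/gpt-3.0.1/sbin/gpt-build globus_simple_ca_bundle-latest.tar.gz gcc32dbg
              $GLOBUS_LOCATION/setup/globus/setup-simple-ca
              # setup-simple-ca asks for the CA subject name, contact e-mail and passphrase,
              # then generates a globus_simple_ca_<hash>_setup-*.tar.gz package to be copied
              # to every other machine in the site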


              /opt/globus/gpt-3.0.1/sbin/gpt-build globus_simple_ca_3cf5d926_setup-0.13.tar.gz gcc32dbg -logdir=/root/globus/docs/logs

              /opt/globus/gt2.4/setup/globus_simple_ca_3cf5d926_setup/setup-gsi
              setup-gsi: Configuring GSI security
              Installing /etc/grid-security/certificates//grid-security.conf.3cf5d926...
              Running grid-security-config...
              Error running grid-security-config. Aborting. at
              /opt/globus/gt2.4/setup/globus_simple_ca_3cf5d926_setup/setup-gsi.pl
              line 152.



On some machines we encountered the following problem:
Error running grid-security-config. Aborting. at
/opt/globus/gt2.4/setup/globus_simple_ca_3cf5d926_setup/setup-gsi.pl line 152.

To overcome this we edited some files like this:

              cd /opt/globus/gt2.4/setup/globus_simple_ca_3cf5d926_setup
              cp grid-security-config.in grid-security-config
              vim grid-security-config
                  - Delete the first line and write #!/bin/sh
              cp grid-cert-request-config.in grid-cert-request-config
              vim grid-cert-request-config
                  - Delete the first line and write #!/bin/sh

              /opt/globus/gt2.4/setup/globus_simple_ca_3cf5d926_setup/setup-gsi

              setup-gsi: Configuring GSI security
              Installing /etc/grid-security/certificates//grid-security.conf.3cf5d926...
              Running grid-security-config...


              GSI : CONFIGURATION PROCEDURE


              Before you use the Grid Security Infrastructure, you should first
              define the DN (distinguished name) that should be used for your
              organization's X509 certificates. If you do not define a DN,
              a default DN will be assigned to you.

              This script will ask some questions about site specific
              information. This information is used to configure
              the Grid Security Infrastructure for your site.

              For some questions, a default response is given in [].
              Pressing RETURN in response to such a question will enable the default.
              This script will overwrite the file --





                    /etc/grid-security/certificates//grid-security.conf.3cf5d926

Voilà! It worked. As we said, Globus Toolkit comes with a default CA.
To be sure you use the same CA across the whole installation site, run the

               grid-default-ca

command and select the right CA from the list. In this case you should select the entry
that looks like 3cf5d926; this is the hash of your CA.
Now we will verify that everything is OK.

For additional steps in verifying the installation please also check
http://www.globus.org/gt2.4/install.html#verify
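
Before running the GRAM tests below you need a signed user certificate (and a host certificate on each
gatekeeper machine). A hedged sketch of the Simple CA request/sign cycle we followed; the commands
are standard GT2 tools, but the prompts, file names and locations may differ slightly on your site:

               # as the client user: generate a key and a certificate request (~/.globus/usercert_request.pem)
               client@masina2:~$ grid-cert-request
               # as root: request a host certificate (/etc/grid-security/hostcert_request.pem)
               masina2:/etc/grid-security# grid-cert-request -host `hostname -f`
               # copy the request files to the CA machine and sign them there
               globus@masina3:~$ grid-ca-sign -in usercert_request.pem -out usercert.pem
               globus@masina3:~$ grid-ca-sign -in hostcert_request.pem -out hostcert.pem
               # copy the signed usercert.pem back to ~client/.globus/ and hostcert.pem
               # back to /etc/grid-security/ on the requesting machine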



Testing GRAM
To run these tests on a single host, you will need both the Client and Server Resource Management
bundles installed. If you want to test a client-only install, you will need to have a server available to
test against, and if you want to test server-only, you will need a client available somewhere.
When you have a user certificate, you can use the following tests to verify a working installation. Don't
forget to set your environment.
First launch a gatekeeper by running the following (as yourself, not root):

               % grid-proxy-init -debug -verify
               % globus-personal-gatekeeper -start

This command will output a contact string like "hostname:4589:/O=Grid/O=Globus/CN=Your Name".
Substitute that contact string for <contact> in the following command:

               % globus-job-run <contact> /bin/date

You should see the current date and time. At this point you can stop the personal gatekeeper and
destroy your proxy with:

             % globus-personal-gatekeeper -killall
             % grid-proxy-destroy



Please note that the above instructions are just for testing, and do not install a fully functioning
gatekeeper on your machine for everyone to use. Installing a system-level gatekeeper for everyone to
use will be covered in the configuration section of this guide. In the future, you will not have to specify
a full contact string for the gatekeeper. Just the hostname is sufficient, if it is being started through
inetd/xinetd.





     Testing MDS
Mpich-G2 does not use MDS directly. Mpich uses a file named machines for resource location. This
file is written by hand and MDS is the mechanism used to discover avaible resources. If we already
know the location and number of processors avaible we don’t need a MDS.

Don't forget to set your environment.

Configuration of the MDS 2.4 release requires the following basic steps:

 1. Acquire LDAP certificate
 2. Start MDS
 3. Send a test query to GRIS and GIIS


Acquire LDAP certs

See the Acquire LDAP certificate section from above. The certificate is required for non-anonymous
access. Configuring non-anonymous access is described in the configuration guide.

Start MDS

Start MDS 2.4 with the following command:

 % GLOBUS_LOCATION/sbin/globus-mds start

This command starts the OpenLDAP 2.0 slapd server for the GRIS. It does not require the
GLOBUS_LOCATION environment variable to be set.

Send a test query to GRIS and GIIS

Send a test query to GRIS on a local host, with the following command:

 % GLOBUS_LOCATION/bin/grid-info-search -anonymous -L

Note that this test does not require you to wait for the LDAP certificate before performing the test,
because it uses the '-anonymous' flag. If you want to disable anonymous access to MDS, see the
configuration section of this guide.

If you have any questions, try the MDS FAQ.

If everything works OK, we should now complete the installation of Globus Toolkit by running the
services via xinetd or inetd. The web page that covers this topic is
http://www-unix.globus.org/toolkit/docs/3.2/installation/install_config_prews.html
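
As a sketch of what that configuration amounts to for the gatekeeper (modeled on the GridFTP entries
in the next section and on the page referenced above; as there, enter the inetd.conf entry on a single
line and replace GLOBUS_LOCATION with the actual value of $GLOBUS_LOCATION):

               # /etc/services
               gsigatekeeper 2119/tcp

               # /etc/inetd.conf (one line)
               gsigatekeeper stream tcp nowait root /usr/bin/env env
               LD_LIBRARY_PATH=GLOBUS_LOCATION/lib
               GLOBUS_LOCATION/sbin/globus-gatekeeper -conf GLOBUS_LOCATION/etc/globus-gatekeeper.conf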


Setting up a GridFTP Server
Websource:
http://www-unix.globus.org/toolkit/docs/3.2/installation/install_config_gridftp.html





     Prerequisites:
Before you can configure a GridFTP server, you must have:
- a host certificate
- appropriate users in the grid-mapfile

Configure GridFTP server daemon
In order to use GridFTP, you need to configure your machine to automatically start the GridFTP server
daemon. As root, add the following entry to /etc/services:


               gsiftp 2811/tcp



Configure Inetd/Xinetd
For Inetd, open /etc/inetd.conf and add the following entry.


               gsiftp stream tcp nowait root /usr/bin/env env
               LD_LIBRARY_PATH=GLOBUS_LOCATION/lib
               GLOBUS_LOCATION/sbin/in.ftpd -l -a -G GLOBUS_LOCATION



Enter the entire string in one line (disregard the word wrapping above). Be sure to replace
GLOBUS_LOCATION with the actual value of $GLOBUS_LOCATION in your environment.

New to 2.2: This entry has changed from the entry provided for the GridFTP server in the Globus
Toolkit 2.0 Administrator's Guide. The reason is that if you followed the instructions from the install
section, you do not have a static in.ftpd. This requires you to set the LD_LIBRARY_PATH so that the
server can dynamically link against the libraries in $GLOBUS_LOCATION/lib. To accomplish the
setting of the environment variable in inetd, we use /usr/bin/env (the location may vary on your
system) to first set LD_LIBRARY_PATH, and then to call in.ftpd itself.

The advantage of this setup is that when you apply a security update to your installation, the GridFTP
server will pick it up dynamically without your having to rebuild it.

For Xinetd, add a file called grid-ftp to the /etc/xinetd.d/ directory with the following contents:


               service gsiftp
               {
                       instances     = 1000
                       socket_type    = stream
                       wait        = no
                       user        = root
                       env         = LD_LIBRARY_PATH=GLOBUS_LOCATION/lib
                       server       = GLOBUS_LOCATION/sbin/in.ftpd
                       server_args    = -l -a -G GLOBUS_LOCATION
                       log_on_success += DURATION
                       nice        = 10
                       disable      = no
               }






Notify Inetd or Xinetd that its configuration file has changed. To do this, follow the instructions for the
server as listed in the manual (man inetd or man xinetd). It will probably be something like
/etc/init.d/xinetd reload.

Be sure to replace GLOBUS_LOCATION with the actual value of $GLOBUS_LOCATION in your
environment.

Testing GridFTP – Stage 2

Testing GridFTP consists of:
       - Starting a GridFTP server (steps 2-3 above)
       - Creating a proxy (step 4)
       - Moving a test file (step 5)

Create a proxy certificate:

               % grid-proxy-init -verify -debug



Create a file named /tmp/file1, and run the following command:

               % globus-url-copy gsiftp://localhost/tmp/file1 \
                 file:///tmp/file2



Check to make sure that /tmp/file2 now exists. You may look in /var/log/messages to see any messages
the GridFTP daemon may have logged about the transfer.
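
Once a second machine also has a running GridFTP server, the same tool can be used to test a
server-to-server transfer between the two; a sketch, with masina2 and masina3 standing in for your
fully qualified host names:

               % globus-url-copy gsiftp://masina2/tmp/file1 gsiftp://masina3/tmp/file1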


That’s about it for installing Globus Toolkit; we will next focus our attention on installing MPICH-G2.


Installing MPICH-G2

Webpage: http://www3.niu.edu/mpi
The first thing to note is that binary distributions of Globus are not supported by MPICH-G2.

   1. Before configuring MPICH-G2, you will need to verify that the GLOBUS_LOCATION
      variable is set.
   2. Acquire MPICH v.1.2.3 from http://www.mcs.anl.gov/mpi/mpich/download.html
   3. Uncompress/untar it with


               # gunzip -c mpich.tar.gz | tar xvf -

   4. Configure MPICH specifying the globus2 device.





                    # cd mpich
                    # ./configure -device=globus2:-flavor=gcc32dbg
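
After configuring, the build itself is just a make in the same directory (a sketch; the make install step
assumes you configured a -prefix for a separate install tree, which the notes above do not record):

                    # make
                    # make install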




How to use MPICH-G2
Before using MPICH-G2 you must have already acquired your Globus security credentials. Then, on
each machine on which you intend to run your MPI application:


  -     you must have an account;
  -     Globus v1.1.4 or later and MPICH-G2 v1.2.1 or later must be installed;
  -     on those machines where you intend to type MPICH-G2's mpirun and that are running Globus v2.0
        or later, you must do one of the following at least once before running your application:
             source $GLOBUS_LOCATION/etc/globus-user-env.csh (csh-style shells), or
             . $GLOBUS_LOCATION/etc/globus-user-env.sh (sh-style shells);
  -     a Globus gatekeeper (a daemon), configured with at least one jobmanager service, must be
        running; and
  -     you must be a registered Globus user, by having your Globus ID (part of your Globus security
        credentials) placed into the Globus grid-mapfile by the local Globus administrator (a sample
        entry is sketched after this list).
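
For reference, a grid-mapfile entry simply maps a certificate subject (DN) to a local account. A sketch
for our client account; the DN shown is a made-up example, so use the subject printed by
grid-cert-info -subject for your own certificate:

               # run as root on each machine; keep the whole entry on one line
               echo '"/O=Grid/O=Globus/CN=Globus User" client' >> /etc/grid-security/grid-mapfile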

Once these are done, you are ready to compile and execute your MPI application using MPICH-G2 by
following these steps:

 1. Compile your application on each machine you intend to run using one of the MPICH-G2
    compilers:

      C Compiler               <MPICH_INSTALL_PATH>/bin/mpicc
      C++ Compiler             <MPICH_INSTALL_PATH>/bin/mpiCC
      Fortran77 Compiler       <MPICH_INSTALL_PATH>/bin/mpif77
      Fortran90 Compiler       <MPICH_INSTALL_PATH>/bin/mpif90

     Of course, if you are planning to run only on a cluster of binary-compatible workstations that share
a filesystem, it suffices to compile your program only once.

 2. Launch your application using MPICH-G2 mpirun. Every mpirun command under the globus2
    device submits a Globus Resource Specification Language Script, or simply RSL script, to a
    Globus-enabled grid of computers. Each RSL script is composed of one or more RSL subjobs,
    typically one subjob for each machine in the computation. You may supply your own RSL script to
    mpirun, or you may have mpirun construct an RSL script for you based on the arguments you
    pass to mpirun and the contents of your machines file (discussed below). In either case, it is
    important to remember that communication between nodes in different subjobs is always done
    over TCP/IP, while the more efficient vendor-supplied MPI is used only among nodes within the
    same subjob.

    You may terminate the entire job by hitting Ctrl-C in your mpirun window. Be careful to hit Ctrl-C
only once, as hitting it multiple times will foil clean termination. Be patient; terminating all the processes
on all the machines cleanly can sometimes take a few minutes.

  -     Using mpirun to construct an RSL script for you





       You would use mpirun if you wanted to launch a single executable file, which implies a set of
one or more binary-compatible machines that all share the same filesystem (i.e., they can all access the
executable file).

         Using mpirun to construct an RSL script for you requires a machines file. The mpirun
command determines which machines file to use as follows:
           1. If a -machinefile argument is specified to mpirun, it uses that; otherwise,
           2. It looks for a file named "machines" in the directory in which you typed mpirun; and
finally,
           3. it looks for <MPICH_INSTALL_PATH>/bin/machines.
         If it cannot find a machines file from any of those places, then mpirun fails.

        The machines file is used to list the computers upon which you wish to run your application.
Computers are listed by naming the Globus jobmanager service on that machine. For most Globus
installations, the default jobmanager service can be used, which requires specifying only the fully
qualified domain name. Consult your local Globus administrator for the name of your Globus
jobmanager service.

        Consider the following example in which we present a pair of fictitious binary-compatible
machines, {m1,m2}.utech.edu, that have access to the same filesystem. Here is what a machines file
that uses the default Globus jobmanager service on each machine might look like.

                 "m1.utech.edu" 10
                 "m2.utech.edu" 5


        The number appearing at the end of each line is optional (default=1). It specifies the maximum
number of nodes that can be created in a single RSL subjob on each machine. mpirun uses the -np
specification by "wrapping around" the machines file. For example, using the machines file above
mpirun -np 8 creates an RSL consisting of a single subjob with 8 nodes on m1.utech.edu; while mpirun
-np 12 creates an RSL with two subjobs where the first subjob has 10 nodes on m1.utech.edu and the
second has 2 nodes on m2.utech.edu; and finally mpirun -np 17 creates an RSL with three subjobs with
10 nodes on m1.utech.edu, followed by 5 nodes on m2.utech.edu, and ending with 2 nodes on
m1.utech.edu again.
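
       Putting this together, a run over the machines file above would look something like the following
(a sketch; myapp and its arguments are placeholders):

                 % <MPICH_INSTALL_PATH>/bin/mpirun -machinefile ./machines -np 12 myapp 123 456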

       Note that intersubjob messaging is always communicated over TCP, even if the two separate
subjobs are on the same machine.

          Using mpirun by supplying your own RSL script

        You would use mpirun supplying your own RSL script if you were submitting to a set of
machines that could not run or access the same executable file (e.g., machines that are not binary
compatible and do not share a file system). In this situation, you must use the Globus Resource
Specification Language (RSL) to write an RSL script specifying the executable filename for each
machine. The RSL scripting language is very flexible but can be rather complex. Here are some rules
that you must follow when writing your own RSL for MPICH-G2 applications. Note that these rules
are not required for all RSL scripts, only for MPICH-G2 applications.





          o Your RSL script must be a multirequest, which requires that the first nonwhitespace
character must be "+".
          o Each subjob must name a Globus jobmanager service using
(resourceManagerContact="<globus_jm_service>"). For most Globus installations,
<globus_jm_service> will simply be the fully qualified domain name of the machine the subjob will be
executed on.
          o Each subjob requires a unique index, starting with 0 and counting up consecutively from
there. The unique index must appear in two places in each subjob: (label="subjob 0") and
(environment=(GLOBUS_DUROC_SUBJOB_INDEX 0)).
          o For those subjobs running on machines equipped with a vendor-supplied implementation of
MPI, and for which MPICH-G2 was configured by specifying an 'mpi' flavor of Globus, the line
(jobtype=mpi) must appear.
          o Some sites require you to specify a 'project' to their scheduler for accounting purposes. For
each machine where such a requirement exists, add (project=xxx) to the subjob.

       The easiest way to write your own RSL request is to modify one generated for you by mpirun.
Specifying -dumprsl on the mpirun command prints the generated RSL and does not launch the
program.

       Consider our previous example in which we wanted to run an application on a cluster of
workstations. Recall that our machines file looked like this:

                  "m1.utech.edu" 10
                  "m2.utech.edu" 5



       Using mpirun with -dumprsl


               # mpirun -dumprsl -np 12 myapp 123 456

       produces the following output (but does not launch the application):

                  +
                  ( &(resourceManagerContact="m1.utech.edu")
                    (count=10)
                    (jobtype=mpi)
                    (label="subjob 0")
                    (environment=(GLOBUS_DUROC_SUBJOB_INDEX 0))
                    (arguments=" 123 456")
                    (directory=/homes/users/smith)
                    (executable=/homes/users/smith/myapp)
                  )
                  ( &(resourceManagerContact="m2.utech.edu")
                    (count=2)
                    (jobtype=mpi)
                    (label="subjob 1")
                    (environment=(GLOBUS_DUROC_SUBJOB_INDEX 1))
                    (arguments=" 123 456")
                    (directory=/homes/users/smith)
                    (executable=/homes/users/smith/myapp)





                      )



       Additional environment variables may be added as in the example below:

                 +
                 ( &(resourceManagerContact="m1.utech.edu")
                   (count=10)
                   (jobtype=mpi)
                   (label="subjob 0")
                   (environment=(GLOBUS_DUROC_SUBJOB_INDEX 0)
                            (MY_ENV 246))
                   (arguments=" 123 456")
                   (directory=/homes/users/smith)
                   (executable=/homes/users/smith/myapp)
                 )
                 ( &(resourceManagerContact="m2.utech.edu")
                   (count=2)
                   (jobtype=mpi)
                   (label="subjob 1")
                   (environment=(GLOBUS_DUROC_SUBJOB_INDEX 1))
                   (arguments=" 123 456")
                   (directory=/homes/users/smith)
                   (executable=/homes/users/smith/myapp)
                 )




       After creating your RSL file you may submit it directly to mpirun as follows:

              # mpirun -globusrsl my.rsl

      Note that when supplying your own RSL, it should be the only argument you specify to
mpirun.

        By default all stdout and stderr will appear on the screen from which you typed the mpirun
command. This can be changed by specifying specific filenames with (stdout=myapp.out) and/or
(stderr=myapp.err) in your RSL script.


 3.4 Example MPI Application

Here is an example MPI application, ring.c, and its associated Makefile.
  * ring.c
  * Makefile

Edit the Makefile by changing MPICH_INSTALL_PATH to your MPICH-G2 installation directory.
After editing the Makefile and following the steps in the preceding section, How to use MPICH-G2,
type the following:






                 # make ring
                 # <MPICH_INSTALL_PATH>/bin/mpirun -np 4 ring



You should see the following output:


               Master: end of trip 1 of 1: after receiving passed_num=4 (should be
               =trip*numprocs=4) from source=3



Here are the contents of ring.c

               #include   <stdio.h>
               #include   <stdlib.h>
               #include   <string.h>
               #include   <mpi.h>

               /* command line configurables */
               int Ntrips; /* -t <ntrips> */
               int Verbose; /* -v */

               int parse_command_line_args(int argc, char **argv, int my_id)
               {

                  int i;
                  int error;

                  /* default values */
                  Ntrips = 1;
                  Verbose = 0;

                  for (i = 1, error = 0; !error && i < argc; i ++)
                  {
                     if (!strcmp(argv[i], "-t"))
                     {
                         if (i + 1 < argc && (Ntrips = atoi(argv[i+1])) > 0)
                             i ++;
                         else
                             error = 1;
                     }
                     else if (!strcmp(argv[i], "-v"))
                         Verbose = 1;
                     else
                         error = 1;

                  } /* endfor */

                  if (error && !my_id)
                  {
                      /* only Master prints usage message */
                      fprintf(stderr, "\n\tusage: %s {-t <ntrips>} {-v}\n\n", argv[0]);
                      fprintf(stderr, "where\n\n");
                      fprintf(stderr,





      "\t-t <ntrips>\t- Number of trips around the ring. "
      "Default value 1.\n");
     fprintf(stderr,
        "\t-v\t\t- Verbose. Master and all slaves log each step. \n");
     fprintf(stderr, "\t\t\t Default value is FALSE.\n\n");
  } /* endif */

  return error;

} /* end parse_command_line_args() */

int main(int argc, char **argv)
{

  int numprocs, my_id, passed_num;
  int trip;
  MPI_Status status;

  MPI_Init(&argc, &argv);
  MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
  MPI_Comm_rank(MPI_COMM_WORLD, &my_id);

  if (parse_command_line_args(argc, argv, my_id))
  {
      MPI_Finalize();
      exit(1);
  } /* endif */

  if (Verbose)
      printf("my_id %d numprocs %d\n", my_id, numprocs);

  if (numprocs > 1)
  {
      if (my_id == 0)
      {
          /* I am the Master */

         passed_num = 0;

         for (trip = 1; trip <= Ntrips; trip ++)
         {
            passed_num ++;

            if (Verbose)
                printf("Master: starting trip %d of %d: "
                    "before sending num=%d to dest=%d\n",
                    trip, Ntrips, passed_num, 1);


            MPI_Send(&passed_num, /* buff */
                1,         /* count */
                MPI_INT,      /* type */
                1,         /* dest */
                0,         /* tag */
                MPI_COMM_WORLD); /* comm */

            if (Verbose)




              printf("Master: inside trip %d of %d: "
                   "before receiving from source=%d\n",
                   trip, Ntrips, numprocs-1);

            MPI_Recv(&passed_num, /* buff */
                1,         /* count */
                MPI_INT,      /* type */
                numprocs-1,     /* source */
                0,         /* tag */
                MPI_COMM_WORLD, /* comm */
                &status);    /* status */

            printf("Master: end of trip %d of %d: "
              "after receiving passed_num=%d "
              "(should be =trip*numprocs=%d) from source=%d\n",
              trip, Ntrips, passed_num, trip*numprocs, numprocs-1);
         } /* endfor */
     }
     else
     {
        /* I am a Slave */

         for (trip = 1; trip <= Ntrips; trip ++)
         {
            if (Verbose)
                printf("Slave %d: top of trip %d of %d: "
                    "before receiving from source=%d\n",
                    my_id, trip, Ntrips, my_id-1);

            MPI_Recv(&passed_num, /* buff */
                1,         /* count */
                MPI_INT,      /* type */
                my_id-1,      /* source */
                0,         /* tag */
                MPI_COMM_WORLD, /* comm */
                &status);    /* status */

            if (Verbose)
                printf("Slave %d: inside trip %d of %d: "
                    "after receiving passed_num=%d from source=%d\n",
                    my_id, trip, Ntrips, passed_num, my_id-1);

            passed_num ++;

            if (Verbose)
                printf("Slave %d: inside trip %d of %d: "
                    "before sending passed_num=%d to dest=%d\n",
                    my_id, trip, Ntrips, passed_num, (my_id+1)%numprocs);


            MPI_Send(&passed_num,      /* buff */
                1,           /* count */
                MPI_INT,        /* type */
                (my_id+1)%numprocs, /* dest */
                0,           /* tag */
                MPI_COMM_WORLD); /* comm */





                                 if (Verbose)
                                    printf("Slave %d: bottom of trip %d of %d: "
                                         "after send to dest=%d\n",
                                         my_id, trip, Ntrips, (my_id+1)%numprocs);
                              } /* endfor */
                           } /* endif */
                      }
                      else
                         printf("numprocs = %d, should be run with numprocs > 1\n",
                    numprocs);

                       MPI_Finalize();

                       exit(0);

                    } /* end main() */




And here are the contents of Makefile

                  #
                  # assumes MPICH-G2 was installed in /usr/local/mpich
                  #

                  MPICH_INSTALL_PATH             = /usr/local/mpich

                  ring: force
                        $(MPICH_INSTALL_PATH)/bin/mpicc -o ring ring.c

                  force:

                  clean:
                       /bin/rm -rf *.o ring


That’s it. It should work if everything is ok.


3.5. Testing procedure
During this installation there were numerous tests, because of the sequential nature of the
installation. If, for example, the certificates part did not work, neither did GRAM, so we insisted that
this report follow an install-then-test phase at each step.
If the last test, the example MPI application, did not work, you should try the following example. It
tests only Globus connectivity, and if this test fails, then the problem lies in the Globus part of the
installation and you should check that.

Below is our Globus version of Kernighan and Ritchie's "hello, world" program, accompanied by
instructions to make and run it. In the same spirit as K&R presented their program, we offer ours as a
very small (minimal?) program designed to flush out all the details of installing and deploying Globus,
acquiring Globus security credentials, registering yourself as a Globus user on each machine, etc.






The instructions below are intended to test one machine at a time. If you are planning to run your
MPICH-G2 application on many different machines, you should start by following the instructions
below on one machine at a time.

Here are the contents of hello.c and of the Makefile for your review.
hello.c :

           #include <stdio.h>
           #include <globus_duroc_runtime.h>

           int main(int argc, char **argv)
           {

           #if defined(GLOBUS_CALLBACK_GLOBAL_SPACE)
              globus_module_set_args(&argc, &argv);
           #endif

               globus_module_activate(GLOBUS_DUROC_RUNTIME_MODULE);
               globus_duroc_runtime_barrier();
               globus_module_deactivate(GLOBUS_DUROC_RUNTIME_MODULE);

               printf("hello, world\n");

               return 0;
           }



Here are the contents of the Makefile for your review.

Makefile
           #
           # It is assumed that you have created a file called "makefile_header"
           # using the following command, substituting "<flavor>" for a particular
           # flavor of your Globus v2.0 or later installation:
           #
           #     $GLOBUS_LOCATION/sbin/globus-makefile-header -flavor=<flavor> \
           #        globus_common globus_gram_client globus_io globus_data_conversion \
           #        globus_duroc_runtime globus_duroc_bootstrap > makefile_header
           #
           #

           RM = /bin/rm

           ###################################################
           ###################################################
           #
           # The rest of the file should _not_ change.
           #
           ###################################################
           ###################################################

           include makefile_header

           hello:





                   $(GLOBUS_CC) $(GLOBUS_CFLAGS) $(GLOBUS_INCLUDES) -c hello.c
                   $(GLOBUS_LD) -o hello hello.o \
                   $(GLOBUS_LDFLAGS) \
                   $(GLOBUS_PKG_LIBS) \
                   $(GLOBUS_LIBS)


               clean:
                   $(RM) -rf *.o hello



Before using the Makefile you must create a file called makefile_header, using the Globus tool globus-
makefile-header and specifying one of the Globus flavors of your installation. You should select the
same Globus flavor you intend to use when configuring MPICH-G2. Here is an example of how to use
globus-makefile-header to create makefile_header, specifying gcc32dbg as the flavor:


         % $GLOBUS_LOCATION/sbin/globus-makefile-header -flavor=gcc32dbg \
         globus_common globus_gram_client globus_io globus_data_conversion \
         globus_duroc_runtime globus_duroc_bootstrap > makefile_header




 1. Write hello.c and Makefile.
 2. Use globus-makefile-header to create the file makefile_header as described above.
 3. Compile hello.c. NOTE: You are not using MPICH-G2's mpicc here.


         % make hello



 4. Write your own RSL file called hello.rsl.
     * If you are not using an MPI flavor of Globus then your RSL file should look like this:

       hello.rsl
                   +
                   ( &(resourceManagerContact="m1.utech.edu")
                     (count=2)
                     (label="subjob 0")
                     (environment=(GLOBUS_DUROC_SUBJOB_INDEX 0)
                        (LD_LIBRARY_PATH /usr/local/globus/lib/))
                     (directory=/homes/users/smith)
                     (executable=/homes/users/smith/hello)
                   )



        * If you are using an MPI flavor of Globus then you must add (jobtype=mpi) to your RSL file so
that it looks like this:

       hello.rsl
                   +





                     ( &(resourceManagerContact="m1.utech.edu")
                       (count=2)
                       (jobtype=mpi)
                       (label="subjob 0")
                       (environment=(GLOBUS_DUROC_SUBJOB_INDEX 0)
                          (LD_LIBRARY_PATH /usr/local/globus/lib/))
                       (directory=/homes/users/smith)
                       (executable=/homes/users/smith/hello)
                     )


    In either case, change resourceManagerContact to your machine, change /usr/local/globus of the
LD_LIBRARY_PATH environment variable to the GLOBUS_LOCATION on that machine (note the
value for LD_LIBRARY_PATH still ends with /lib/), and change directories in directory and
executable to point to your directory.

 5. Set up your Globus environment using

                  % source $GLOBUS_LOCATION/etc/globus-user-env.csh
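
     If your login shell is sh-style rather than csh, the equivalent (already shown earlier in this memo) is:

                  % . $GLOBUS_LOCATION/etc/globus-user-env.sh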

 6. Run the program using your hello.rsl and globusrun. NOTE: you are not using MPICH-G2's
mpirun here. You should see the following output:

                  % $GLOBUS_LOCATION/bin/globusrun -w -f hello.rsl
                  hello, world
                  hello, world
                  %



If hello.c compiled without any errors but did not run correctly, then the problem is not with MPICH-
G2 or its installation. It is most likely a Globus-related problem. Start by contacting your local
Globus administrator and, if necessary, continue by checking the Globus Toolkit Error FAQ.

Testing notes were written as problems occurred.






    3.6. Installation quick-list

This quick-list will assist you in installing Globus Toolkit 2.4 and MPICH-G2. Just check off each step
as you progress through the installation process.

      Check      Step    Description
      [ ]        1       User setup
      [ ]        2       Install Globus Packaging Toolkit
      [ ]        3       Install Globus Toolkit 2.4
      [ ]        3.1.    Configure GRAM
      [ ]        3.2.    Configure MDS
      [ ]        3.3.    Configure GridFTP
      [ ]        4       Install MPICH-G2
      [ ]        5       Test installation







    4. Activity report
4.1. Work completed
      We managed to install Globus Toolkit 2.4 and MPICH-G2.
      We ran several test applications to verify each stage of the installation


4.2. Problems
      MPICH-G2 does not run with the latest version of Globus Toolkit 3


4.3. Remaining work
      To extend the list of test applications for Globus and MPICH


5. Conclusions
       This document describes how to install and run Globus Toolkit 2.4 and MPICH-G2. This
GRID framework allows the programmer to ignore the complications of writing secure distributed
applications.
       Globus Toolkit is a robust framework, proven by time, and together with the Globus
implementation of MPICH it allows the programmer a lot of freedom in writing his or her applications.






Bibliography
   [1]   Seminar Paper, MPICH-G2 by Rene Grabner (rene.grabner@informatik.tu-chemnitz.de)
   [2]   Globus Documentation (http://www.globus.org)
   [3]   MPICH Documentation (http://www3.niu.edu/mpi)
   [4]   MPICH-G2: A Grid-Enabled Implementation of the Message Passing Interface, Whitepaper
   [5]   Globus Toolkit 2.2 MDS Technology Brief, Whitepaper



