Evaluating Global Optimisation for Data Grids using Replica Location Services

Ernest Sithole, Gerard P. Parr, Sally I. McClean
School of Computing and Information Engineering, Faculty of Engineering
University of Ulster, Coleraine, Cromore Road, Coleraine BT52 1SA,
Co. Londonderry, Northern Ireland
{sithole-e, gp.parr, si.mcclean}@ulster.ac.uk

P. Dini
Cisco Systems, San Jose, CA, USA
pdini@cisco.com


Abstract

   As efforts to develop grid computing solutions gather pace, considerable attention has been directed at defining and addressing the technical requirements associated with computational grids. However, less focus has been given to the increasingly important challenge of achieving ready data availability over dispersed environments. Previous studies on data availability have explored replication strategies for Data Grids, with some incorporating both replication and processor scheduling schemes. These endeavours have, however, largely resulted in performance improvements at local sites without addressing global optimisation for Data Grid networks, which typically consist of scattered nodes that participate in joint experiments. This paper sets out to study, through the OPNET simulation environment, the impact on Data Grid performance of employing Replica Location Service (RLS)-based schemes. The RLS is designed as an enabling tool for achieving global efficiency in data replication decisions by locating replicated datasets at the most suitable places in distributed Grid environments. The results from our simulations show that using RLS-based schemes to obtain data from neighbouring sites improves the performance of the Data Grid network.

1. Introduction

   Data Grids have been developed primarily as a specialised support infrastructure for the core computational grid operations that handle data-intensive applications. Data Grids essentially provide this support through the management of the storage resources and the huge data sets associated with data-intensive applications.

   Applications for which Data Grids are put to use typically require the joint participation of multiple user communities, often involving the sharing of data and physical resources across organisational and geographic boundaries. Because of the distributed nature of Data Grid environments, there are technical challenges which present barriers to the effective operation of such joint networks.

   For data-intensive scientific applications such as high energy nuclear physics (HENP), astrophysics and computational genomics experiments to run efficiently on Grids, the key technical challenges which have to be addressed are:
   (i)   load scalability, which requires acceptable QoS indices to be maintained in the Data Grid network as the Grid responds to cope with increased volumes of application service requests,
   (ii)  geographic scalability, to ensure that all user requests experience the same QoS guarantees regardless of the actual physical separation of application requests from server entities on the Data Grid network,
   (iii) dynamicity of the network, which requires quick identification of interruptions and recovery schemes so as to deal with application crashes, especially when long-lasting data transactions that have already generated huge amounts of data experience disruptions,
   (iv)  geographic dispersion (of both user and resource entities), which necessitates an efficient information service system for monitoring the status of the entire network and,
   (v)   system heterogeneity, which renders it difficult to obtain a uniform view of network resources due to disparities occurring in a number of dimensions such as types of operating systems, mass storage systems, and disk hardware configurations.

   Ideally, the coordination of data and physical resources across the Grid network should render the required data sets readily available as if they reside on the local storage at processing nodes.
   The data replication technique has been widely employed in distributed networks to increase data availability and reduce access latencies for applications obtaining data from remote sources.

   However, direct use of replication is essentially a local-site optimisation approach, with performance improvements largely affecting the processing sites as standalone entities. An important requirement is to render the performance gains obtained from the policies enforced at individual sites transferable to the entire Grid network, which is typically made up of loosely coupled resources.

   In this paper, we discuss the OPNET simulation models which we developed for studying replication schemes that are based on the Replica Location Service (RLS), an information service system used for locating copies of replicated data sets over Data Grids as proposed in [11].

   The rest of the paper is organised as follows: in the next section, we look at the relevant work on replication techniques and replica location awareness in Data Grids. Section 3 follows with a discussion of the key considerations that were made in developing our simulations. The experimental results are presented and discussed in Section 4 and the concluding section gives a brief consideration of additional features that we intend to build into our model for future studies.

2. Related Work

   A number of research initiatives have gone into developing and studying the performance of replication techniques over Data Grids.

   In the Storage Resource Manager (SRM) framework, models for data caching have been developed by taking into account important characteristics associated with input data sets, such as the file sizes, popularity and age of the individual data blocks, as well as the network and hardware related delays involved in fetching the required data from source storage. Based on one or a combination of these factors, various models have been developed for replacing cached data [8, 9] and the performance of replication policies has been compared through experiments run over both physical test beds and in simulation.

   Other approaches for achieving improved data availability have sought to combine both CPU job scheduling methods and data replication techniques. The ChicSim (Chicago Grid Simulator) [10] provides a simulation framework for studying the performance of different combinations of job scheduling and asynchronous replication algorithms. In the OptorSim package described in [3], scenarios that also employ job scheduling and replication methods have been studied, and other OptorSim experiments have considered scenarios obtained from the joint use of data replication algorithms and file accessing schemes, as studied in [2].

   An important contribution towards achieving replica optimisation for Data Grids was the proposal of the GIGascale Global Location Engine (GIGGLE) framework for implementing Replica Location Service operation over Data Grids [11]. Depending on the specific requirements of target users on the Data Grid, a variety of GIGGLE Replica Location Service implementations can be designed and developed through suitable choices of values for each of the six key functional parameters of the RLS. Further work on GIGGLE, presented in [12], has studied the scalability and latency responses of the RLS, and a study on a possible Peer-to-Peer (P2P) configuration of the Globus RLS is considered in [13].

   Our work differs from previous studies on site replication and job scheduling schemes in that we examine the global performance of the Data Grid by building replica location awareness onto the optimisation policies already implemented locally at individual sites.

   As far as earlier research on the RLS is concerned, the focus was largely an isolated study of the scalability and latency performance of RLS functions, without considering how the use of RLS schemes actually impacted the operation of Data Grid sites. Additionally, the RLS sites involved in previous testbed experiments were few in number: there are 5 sites for Giggle [11], 3 for the RLS scalability tests [12], and a single local area network cluster of 16 nodes is used in the Peer-to-Peer Replica Location Service (P2P RLS) work [13]. In contrast, our simulations cover almost the entire span of the US territory, over which the 13 sites of our expanded Data Grid model are spread.

3. Our Simulation Framework

   Our approach builds on the Data Grid model we developed in [1], which was based on the United States Compact Muon Solenoid (US CMS) high energy physics test bed. The key difference from the previous model is the inclusion, in the current version, of more replica sites containing replicated data sets to be identified through the RLS.
   The OPNET simulation package we used for our models provides features for specifying the geographic location of the Grid sites. Additionally, the OPNET device models used in building the network resource fabric are based on the functional characteristics of real network equipment from leading manufacturers. We consider the use of such features a key contributor towards realistic modelling of the network behaviours and trends that we set out to study.

   The key considerations in our modelling approach were as follows:
   (i)   adoption of the US CMS Data Grid model, which we developed in [1]. The model is made up of four Tier 2 processing sites and one Tier 1 site with source data,
   (ii)  retention of the application characteristics and requirements of the CMS experiments,
   (iii) addition of more replica sites, which serve as additional cache nodes for improved data sharing in the Data Grid,
   (iv)  definition of the local replication operations implemented at the Tier 2 processing sites by fixing the rates at which data sets are read from the local cache and remote sites respectively and,
   (v)   definition of the replica location and replica selection functions so that, when obtaining data from outside the local processing site, the respective rates at which data is accessed from each of the candidate replica sites are determined by the RLS-based control weights.

3.1 The Data Grid Network

   The US CMS network has the Fermi National Accelerator Laboratory (FNAL) as the Tier 1 site providing the source data sets for processing at the Tier 2 sites. The four Tier 2 sites are at Caltech, San Diego, the University of Florida and Madison, Wisconsin. In addition to the input data sets obtained from the source and the local cache sites, each Tier 2 processing site has access to data from three replica sites, and access to this data is enabled through the information provided by the RLS.

3.2 The CMS Data Grid Application

   The high energy physics experiment which is modelled to run on the Data Grid network is defined as an application routine that is launched from the Grid user environment. We defined the CMS experiment as a custom application since the OPNET environment's standard applications suite currently has no features for specifying Grid applications directly [7]. Only one profiled user, GridUser, was specified for the simulations, and our model assumes that only one application, GridApp, is associated with this profile. In turn, GridApp is broken down into tasks, and only one task was defined. The task contains phases, which are atomic request events sent by users to the processing sites to initiate executions of the Grid application.

   Details of how application requests are generated in terms of the Profile, Application, Task and Phase definitions are provided in [6]. A Poisson distribution with a mean of 10 seconds defines the random inter-repetition times of the application requests.
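   As an illustration of the request pattern described above (a standalone sketch, not code from the OPNET model; the function name and request count are ours), the launch times of successive GridApp requests from the single GridUser profile can be generated by accumulating inter-repetition gaps drawn from a Poisson distribution with a mean of 10 seconds:

```python
import numpy as np

def generate_request_times(n_requests=100, mean_gap_s=10.0, seed=1):
    """Launch times (seconds) of successive GridApp phase requests.

    Inter-repetition gaps are drawn from a Poisson distribution with the
    given mean, mirroring the TAppInterval setting used in the model.
    """
    rng = np.random.default_rng(seed)
    gaps = rng.poisson(lam=mean_gap_s, size=n_requests)  # integer-second gaps, mean of ~10 s
    return np.cumsum(gaps)

# Example: the first five request launch times for GridUser.
print(generate_request_times(n_requests=5))
```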
3.3 The CMS Job Requirements

   The behaviour of the Data Grid job routines as they execute on the CPU server is described by two sets of definitions. One set of information is based on the physical resource requirements of the Data Grid job, and that information is fed into OPNET's General Server Attributes Object.

   Table 1: Grid Job Requirements

   Symbol         Grid Job Requirement                               Value
   JLimit         Transaction limit for incoming jobs                Infinite
   JPolicy        Instance limit policy for incoming jobs            Queue
   TCPU           Average CPU time (sec)                             0.4 - 0.6
   TAppInterval   Application request inter-repetition time (sec)    Poisson, mean 10
   SInputBlock    Average read block size (MB)                       0 - 10
   SOutptBlock    Average write block size (MB)                      0 - 50
   SCPUMem        Average size of memory, including RSS (MB)         0 - 10

   We tuned the job requirements to correspond to Class G1 of the HENP US Compact Muon Solenoid experiment [5]. The specific job requirements entered were the average CPU time, average page fault rate, read and write block sizes, memory and resident set sizes, and the number of input and output files. See Table 1.
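   For readers reconstructing the scenario outside OPNET, the Table 1 settings can be captured in a small structure such as the following; the field names are hypothetical and chosen only for readability, and the Class G1 values are copied from the table:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GridJobRequirements:
    """Class G1 CMS job requirements, mirroring Table 1 (hypothetical encoding)."""
    job_limit: float = float("inf")        # JLimit: transaction limit for incoming jobs
    instance_limit_policy: str = "queue"   # JPolicy: queue jobs above the instance limit
    cpu_time_s: tuple = (0.4, 0.6)         # TCPU: average CPU time range (sec)
    app_interval_mean_s: float = 10.0      # TAppInterval: Poisson mean (sec)
    read_block_mb: tuple = (0, 10)         # SInputBlock: average read block size (MB)
    write_block_mb: tuple = (0, 50)        # SOutptBlock: average write block size (MB)
    memory_mb: tuple = (0, 10)             # SCPUMem: average memory including RSS (MB)

cms_class_g1 = GridJobRequirements()
```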
   The CPU Server device attribute list holds the other set of runtime characteristics of the Data Grid job, such as the queuing policy, the storage partitions with which the CPUs exchange data and the actual CPU partitions to which jobs are directed. See Figure 1.

   [Figure 1: The CPU Server model, showing the queue holding incoming Data Grid job requests, the CPU partition, the Resident Set Size and Memory, and the I/O reads and writes of data blocks and page faults to the local, replica and remote storage partitions.]

3.4 The RLS-based Data Grid

3.4.1 Replication Weights for Local Site Cache. For each of the four Tier 2 processing sites, the data read in from storage is specified by tuning the Data Grid job's storage read access distribution weights in order to achieve the required local cache hit rates.

   [Figure 2: RLS-based access to external data. Application requests arrive at a Tier 2 processing site with a local cache; data missing from the cache is obtained from Neighbour Sites 1 to N and from the FNAL Tier 1 source.]

3.4.2 RLS Weights for Remote Site Data Selection. In the event of cache misses, for each of the four cache hit rates in 3.4.1 the resultant access to external sources of data is defined by fixing the remote storage read weights according to the data selection decisions made by the RLS.

   Access to external data involves obtaining data from three neighbouring sites as well as from the FNAL source storage (see Figure 2). Without the RLS, access to external data involves a direct request to the FNAL remote site for every miss at the processing site's cache. Table 2 lists the site and network parameters specified for the model.

   Table 2: Site and Network Data Access Parameters

   Symbol      Parameter                          Value
   B           Link bandwidths                    1 Gbps (propagation at the speed of light)
   WAccRLS     RLS-site read access weight        (0 - 100%) / (NSource + NRLS)
   WAccLoc     Local site read access weight      0 - 100%
   NSource     Number of source data sites        1
   NRLS        Number of RLS neighbour sites      0 - 3
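   The weighting scheme in Table 2 can be read as follows (a minimal sketch of our interpretation, not code from the model): the local read weight WAccLoc tracks the target cache hit rate, and the remaining miss fraction either goes entirely to the FNAL source when the RLS is not used, or is spread evenly across the source and the NRLS neighbour sites that the RLS identifies.

```python
def read_access_weights(local_hit_rate, n_rls_neighbours, use_rls):
    """Split a Tier 2 site's storage read accesses across candidate data sources.

    local_hit_rate   : WAccLoc, fraction of reads served by the local cache (0.0 to 1.0)
    n_rls_neighbours : NRLS, neighbour replica sites known to the RLS (0 to 3)
    use_rls          : if False, every cache miss is sent directly to the FNAL source
    """
    miss_fraction = 1.0 - local_hit_rate
    weights = {"local_cache": local_hit_rate}
    if use_rls and n_rls_neighbours > 0:
        n_source = 1  # NSource: the single FNAL Tier 1 site
        per_site = miss_fraction / (n_source + n_rls_neighbours)  # WAccRLS per external site
        weights["fnal_source"] = per_site
        for i in range(1, n_rls_neighbours + 1):
            weights[f"neighbour_{i}"] = per_site
    else:
        weights["fnal_source"] = miss_fraction
    return weights

# Example: a 50% local cache hit rate with three RLS neighbour sites.
print(read_access_weights(0.5, 3, use_rls=True))
```

   Under this reading, a site with a modest local cache still spreads its misses across four external sources when the RLS is available, which is the behaviour explored in Section 4.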
3.5 Validation

   The request events were generated according to both serial and concurrent repetition patterns for the Profile and Application specifications. Constant application durations and inter-arrival times were assigned to the instantaneous requests. The durations of the request simulations were compared with the calculated values expected for the simulation epoch.

4. Results and Discussion

   The Tier 2 processing sites were configured as homogeneous clusters with identical hardware configurations and load requests. Hence each of the metrics considered in our simulations (i.e. CPU Utilisation, Job Completion Time, Job Completions and Aborted Jobs) was evaluated as follows:

   Site Metric = arithmetic mean over a site's server entities
               = (MetricServer1 + ... + MetricServerN) / N

   Because of variations in the distances of the processor sites from the data sources, metrics for the entire Grid were determined as follows:
   Network Metric = geometric mean of the Site Metrics in the Data Grid
                  = (MetricSite1 * ... * MetricSiteN)^(1/N)
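   A short sketch of how these two aggregates can be computed (illustrative only; the metric values themselves come from the OPNET statistics, and the numbers below are placeholders):

```python
import math
from statistics import mean

def site_metric(server_values):
    """Arithmetic mean of a metric over the server entities of one Tier 2 site."""
    return mean(server_values)

def network_metric(site_values):
    """Geometric mean of the Site Metrics over all sites in the Data Grid."""
    return math.prod(site_values) ** (1.0 / len(site_values))

# Example: per-server CPU utilisation (%) at each of the four Tier 2 sites.
sites = [site_metric(v) for v in ([32.0, 34.5, 33.1], [30.2, 31.7, 29.8],
                                  [28.9, 30.4, 29.6], [27.5, 28.8, 28.1])]
print(network_metric(sites))
```

   Using the geometric rather than the arithmetic mean at the Grid level keeps a single site, whose distance from the data sources makes it an outlier, from dominating the network-wide figure.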
4.1 CPU Utilisation for Jobs

   As shown in Figure 3, the local processing sites' cache hit ratios were varied from 0 to 100%. Without the RLS, CPU Utilisation varies considerably from the lowest value (when all input data is fetched externally from the FNAL remote site) to the theoretical maximum (when all data resides in the local cache). When the RLS is used to point to the data obtainable from neighbouring sites, comparable CPU Utilisation levels are obtained irrespective of the local cache rates. Figure 3 shows that the RLS can leverage a reasonably good local cache policy at the processing site to approach the best performance.

   [Figure 3: Average Site CPU Utilisation (%) against Local Site Cache Hit Rate (%), with and without the RLS.]

4.2 Job Response Time

   The results in Figure 4 show a similar trend to that in Figure 3. Without the RLS, Job Completion Times vary distinctly as the local cache performance at the Tier 2 processing site changes. With the RLS in use, Completion Times vary within a narrow range and approach the lowest possible response time.

   And again, as in Figure 3, the results here suggest that by using the RLS to point to the data at neighbour sites, only satisfactory local-cache hit levels are needed to achieve good overall network performance.

   [Figure 4: Average Job Completion Time (sec) against Local Site Cache Rate (%), with and without the RLS.]

4.3 Job Completions

   In Figure 5, the job process instances that successfully run to completion are negligibly low if the local site cache hit ratios are small and the RLS is not used.

   [Figure 5: Average Job Completions over a 10-minute period against Local Site Cache Rate (%), with and without the RLS.]

   When the RLS is in place, the completion of job instances increases considerably since the processors do not have to wait too long to get input data.

4.4 Aborted Jobs

   Figure 6 confirms the observation made in the complementary graph in Figure 5 by showing that the number of aborted job instances is very high for low local cache rates when working without the RLS.
   [Figure 6: Aborted Jobs over a 10-minute period against Local Cache Hit Rate (%), with and without the RLS.]

   When the RLS is used, the rate of aborted jobs falls sharply. This trend is expected since the CPUs at the Tier 2 sites experience lower wait times by servicing active job instances using data from nearby sites.

5. Conclusion and Future Directions

   The key observation to emerge from our studies is that, as long as the local caching policies employed by the processor sites work reasonably well, Data Grid performance can be expected to approach the best possible levels if the RLS is used to identify the nearest sources of replicated data on the network.

   However, a number of assumptions were made regarding the model we built, one supposition being that the RLS is perfectly reliable, with no errors present in the information on replica locations over the Grid. In addition, the overheads involved in querying the RLS were considered negligible, and perfect network states were assumed when fetching data externally.

   Hence, as further enhancements to the future version of our model, we intend to incorporate dynamic modulation of the read access weights for data sources, with factors such as network status and the read latencies associated with source storage being used as cost parameters by which the access weights for replicas are dynamically adjusted. A further set of parameters to be factored into the simulations will be the latencies associated with RLS queries and the delays due to incorrect RLS information.

6. References

[1] E. Sithole, G. P. Parr and S. I. McClean. "Data Grid Performance Analysis through Study of Replication and Storage Infrastructure Parameters". ACM/IEEE Conference on Cluster Computing and the Grid (CCGrid '05), University of Cardiff, Wales, May 2005.

[2] W. Bell, D. Cameron, L. Capozza, A. Millar, K. Stockinger, and F. Zini. "Simulation of Dynamic Grid Replication Strategies in OptorSim". 3rd International IEEE Workshop on Grid Computing (Grid 2002), Baltimore, 2002.

[3] D. G. Cameron, R. C. Schiaffino, A. Millar, C. Nicholson, K. Stockinger, and F. Zini. "Evaluating Scheduling and Replica Optimisation Strategies in OptorSim". Proceedings of the Fourth International Workshop on Grid Computing, Phoenix, Arizona, November 2003.

[4] Standard Performance Evaluation Corporation. "SPEC CPU2000 Results". http://www.spec.org/cpu2000/results/index.html, January 2005.

[5] M. Ernst. "The US CMS Grid". DESY IT Seminar, April 2003.

[6] OPNET Technologies Inc. "SCE User Guide for Modeler - Using the SCE". http://opnet.com, January 2004.

[7] OPNET Technologies Inc. "Standard Models User Guide - Applications Model User Guide". http://opnet.com, January 2004.

[8] E. Otoo, F. Olken, and A. Shoshani. "Disk Cache Replacement Algorithms for Storage Resource Managers in Data Grids". Proceedings of the IEEE/ACM SC2002 Conference, Baltimore, Maryland, November 2002.

[9] E. Otoo and A. Shoshani. "Accurate Modeling of Cache Replacement Policies in a Data Grid". Proceedings of the 20th IEEE / 11th NASA Goddard Conference on Mass Storage Systems and Technologies (MSS '03), April 2003.

[10] K. Ranganathan and I. Foster. "Simulation Studies of Computation and Data Scheduling Algorithms for Data Grids". Journal of Grid Computing, 1(1), 2003.

[11] A. Chervenak, E. Deelman, I. Foster, L. Guy, W. Hoschek, A. Iamnitchi, C. Kesselman, P. Kunszt, and M. Ripeanu. "Giggle: A Framework for Constructing Scalable Replica Location Services". Proceedings of the IEEE Supercomputing Conference, 2002.

[12] A. Chervenak, N. Palavalli, S. Bharathi, C. Kesselman, and R. Schwartzkopf. "Performance and Scalability of a Replica Location Service". Proceedings of the High Performance Distributed Computing Conference (HPDC-13), Honolulu, HI, June 2004.

[13] M. Cai, A. Chervenak, and M. Frank. "A Peer-to-Peer Replica Location Service Based on a Distributed Hash Table". Proceedings of the Supercomputing (SC2004) Conference, Pittsburgh, PA, November 2004.

								