A High-Performance Framework for
Earth Science Modeling and Data Assimilation




V. Balaji (vb@gfdl.gov), SGI/GFDL
First ESMF Community Meeting
NASA/GSFC, Washington, 30 May 2002
Outline

1.   Background
2.   ESMF Objectives and Scientific Benefits
3.   ESMF Overview
4.   Final Milestones
5.   Development Plan
6.   Beyond 2004: ESMF Evolution
Technological Trends

In climate research and NWP...
   increased emphasis on the detailed representation of individual
   physical processes requires many teams of specialists to contribute
   components to an overall coupled system

In computing technology...
   hardware and software complexity in high-performance computing is
   increasing as we shift toward scalable computing architectures
Technological Trends

In software design for broad communities...
   the open source community has provided a viable approach to
   constructing software that meets diverse requirements through “open
   standards”, which evolve through consultation and prototyping across
   the user community

“Rough consensus and working code.” (IETF)
Community Response

Modernization of modeling software
   – Abstraction of underlying hardware to provide a uniform programming
     model across vector, uniprocessor and scalable architectures
   – Distributed development model characterized by many contributing
     authors; use of high-level language features for abstraction to
     facilitate the development process
   – Modular design for interchangeable dynamical cores and physical
     parameterizations; development of community-wide standards for
     components

Development of prototype frameworks
   – GFDL (FMS), NASA/GSFC (GEMS)
   – Other framework-ready packages: NCAR/NCEP (WRF), NCAR/DOE (MCT)

The ESMF aims to unify and extend these efforts.
Framework examples

• Atmosphere and ocean model grids.
  Traditionally, both models shared the same grid, with fluxes passed
  through a common block.
  Under a framework, independent model grids are connected by a coupler.

• Parallel implementation of legacy code.
  Traditionally, one took the old out-of-core solver and replaced
  write-to-disk / read-from-disk with shmem_put / shmem_get.
  Under a framework, there is a uniform way to specify and fulfil data
  dependencies through high-level calls.
The ESMF Project

• The need to unify and extend current frameworks
  into a community standard achieves wide currency.
• NASA offers to underwrite the development of a
  community framework.
• A broad cross-section of the community meets and
  agrees to develop a concerted response to NASA,
  uniting coupled models and data assimilation in a
  common framework.
• Funding began February 2002: $10 million over three
  years.
Project Organization

NASA ESTO/CT sponsors three coordinated parts:

   Part I    Core Framework Development     (NSF NCAR PI)
   Part II   Prognostic Model Deployment    (MIT PI)
   Part III  Data Assimilation Deployment   (NASA DAO PI)

Each part carries its own proposal-specific milestones as well as joint
milestones shared across the project.

A Joint Specification Team spanning all three parts is responsible for
requirements analysis, system architecture and API specification.
Design Principles

Modularity     data-hiding, encapsulation, self-sufficiency
Portability    adhere to official language standards, use community-standard
               software packages, comply with internal standards
Performance    minimize abstraction penalties of using a framework
Flexibility    address a wide variety of climate issues by configuring
               particular models out of a wide choice of available
               components and modules
Extensibility  design to anticipate and accommodate future needs
Community      encourage users to contribute components, develop in an open
               source environment
Application Architecture

Layers, from top to bottom:
   Coupling Layer           –  ESMF Superstructure
   Model Layer              –  User Code
   Fields and Grids Layer   –  ESMF Infrastructure
   Low-Level Utilities      –  ESMF Infrastructure
   External Libraries       –  BLAS, MPI, netCDF, …
Sample call structure

Driver (user code):
   initialize ESMF
   initialize atmos, ocean
   get grids for atmos, ocean
   initialize regrid
   time loop

User code: the atmos and ocean models must provide return_grid, and call
ESMF halo update, I/O, etc.

ESMF infrastructure: provides halo update, I/O.
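
A minimal Fortran 90 sketch of this call sequence, assuming illustrative
module and routine names (esmf_mod, atmos_mod, ocean_mod, regrid_mod and
their contents are placeholders, not the final ESMF API):

   program coupled_driver
      ! Sketch only: every module and routine name here is a placeholder.
      use esmf_mod,   only: esmf_init, esmf_final, grid_type
      use atmos_mod,  only: atmos_init, atmos_step, atmos_return_grid
      use ocean_mod,  only: ocean_init, ocean_step, ocean_return_grid
      use regrid_mod, only: regrid_init, regrid
      implicit none
      type(grid_type) :: atmos_grid, ocean_grid
      integer         :: n

      call esmf_init()                          ! initialize ESMF
      call atmos_init()                         ! initialize atmos, ocean
      call ocean_init()
      call atmos_return_grid(atmos_grid)        ! get grids for atmos, ocean
      call ocean_return_grid(ocean_grid)
      call regrid_init(atmos_grid, ocean_grid)  ! initialize regrid
      do n = 1, 240                             ! time loop (e.g. 240 coupling steps)
         call atmos_step()
         call regrid()                          ! exchange boundary fields
         call ocean_step()
      end do
      call esmf_final()
   end program coupled_driver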
Framework Architecture

Components and Coupling
   – gridded component interface
   – collective data transfers/coupling
   – collective I/O

Fields and Grids
   Fields
      – field metadata
      – field and field bundle data
      – field I/O
   Grids
      – grid metadata
      – grid decomposition
   Parallel utilities
      – transpose, halo, etc.
      – abstract machine layout

Low-Level Utilities
   – event alarms
   – performance profiling
   – I/O primitives
   – communication primitives, etc.
Superstructure: CONTROL

– An MPMD ocean and atmosphere register their
  presence (“call ESMF_Init”). A separate coupler
  queries them for inputs and outputs, and verifies
  the match.
– “In an 80p coupled model, assign 32p to the
  atmosphere component, running concurrently
  with the ocean on the other 48. Run the land
  model on the atmos PEs, parallelizing river-
  routing across basins, and dynamic vegetation on
  whatever’s left.”
– Possible extension of functionality: “Get me CO2
  from the ocean. If the ocean component doesn’t
  provide it, read this file instead.”
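
A hedged sketch of how the 80-processor layout in the second bullet above
might be expressed as a run-time Fortran namelist; the group and variable
names are illustrative only, not ESMF syntax:

   &coupler_layout
       total_pes      = 80        ! group/variable names are illustrative only
       atmos_pes      = 1, 32     ! PE range: atmosphere, concurrent with ocean
       ocean_pes      = 33, 80    ! PE range: ocean on the other 48 PEs
       land_pes       = 1, 32     ! land model runs on the atmos PEs
       river_pes      = 1, 16     ! river routing parallelized across basins
       vegetation_pes = 17, 32    ! dynamic vegetation on whatever is left
   /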
Superstructure: COUPLER

– “How many processors are you running on?”
– “What surface fields are you willing to provide?”
– “Here are the inputs you requested. Integrate
  forward for 3 hours and send me back your
  surface state, accumulated at the resolution of
  your timestep.”
– Possible extension of functionality: define a
  standard “ESMF_atmos” datatype.
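
The same dialogue written as a hedged Fortran sketch; couple_step, the
derived types and the query routines are all illustrative names, not the
ESMF interface:

   subroutine couple_step(atmos, ocean)
      ! Sketch only: types and routines below are placeholders.
      use coupler_mod, only: component_type, field_bundle_type,         &
                             comp_pe_count, comp_exports, comp_imports, &
                             comp_run
      type(component_type), intent(inout) :: atmos, ocean
      type(field_bundle_type) :: atmos_surface, ocean_surface
      integer :: npes

      npes = comp_pe_count(atmos)              ! "How many processors are you running on?"
      atmos_surface = comp_exports(atmos)      ! "What surface fields are you willing to provide?"
      call comp_imports(ocean, atmos_surface)  ! "Here are the inputs you requested."
      call comp_run(ocean, hours=3)            ! "Integrate forward for 3 hours..."
      ocean_surface = comp_exports(ocean)      ! "...send me back your surface state."
   end subroutine couple_step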
Infrastructure: GRID

– Non-blocking halo update (sketched after this list):
                call halo_update()
                …
                call halo_wait()
– Bundle data arrays for aggregate data transfer.
– Redistribute data arrays on a different
  decomposition.
– Possible extension of functionality: 3D data
  decomposition.
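
A short sketch of how the non-blocking update in the first bullet overlaps
communication with computation; halo_update and halo_wait follow the slide,
the other routine and argument names are illustrative:

   call halo_update(field, domain)   ! start the halo exchange (non-blocking)
   call compute_interior(field)      ! work that needs no halo data
   call halo_wait(field, domain)     ! block until the halo rows have arrived
   call compute_boundary(field)      ! work that touches the halo region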
Infrastructure: GRID

– Given this function for the grid curvature,
  generate all the metrics (dx, dy, etc) needed for
  dynamics on a C-grid.
– Refine an existing grid.
– Support for various grid types (cubed-sphere,
  tripolar, …)
– Possible extension of functionality: standard
  differential operators (e.g. grad(phi), div(u))
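
For the simplest case of a regular latitude-longitude grid, the metric
generation in the first bullet amounts to something like the following
sketch (radius, lon, lat, dx, dy are illustrative names; a general
curvilinear grid would use the supplied curvature function instead):

   ! Cell metrics for a lat-lon C-grid cell (i,j); angles in radians.
   dx(i,j) = radius * cos(lat(j)) * (lon(i+1) - lon(i))   ! east-west width
   dy(i,j) = radius * (lat(j+1) - lat(j))                 ! north-south height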
Infrastructure: REGRID

– ESMF will set a standard for writing grid overlap
  information between generalized curvilinear
  grids.
– Clip cells as needed to align boundaries on non-aligned
  grids (e.g. coastlines).
– Support for various grid types (cubed-sphere,
  tripolar, …)
– Possible extension of functionality: efficient
  runtime overlap generation for adaptive meshes.
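
Once the overlap information is written in the standard form, applying it
is essentially a sparse matrix-vector product; a hedged sketch with
illustrative array names:

   ! Conservative regrid of src (source grid) onto dst (destination grid)
   ! using precomputed overlap weights; all array names are illustrative.
   dst(:) = 0.0
   do n = 1, num_overlaps
      dst(dst_cell(n)) = dst(dst_cell(n)) + weight(n) * src(src_cell(n))
   end do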
General features


– ESMF will be usable by models written in
  F90/C/C++.
– ESMF will be usable by models requiring adjoint
  capability.
– ESMF will be usable by models requiring shared
  or distributed memory parallelism semantics.
– ESMF will support SPMD and MPMD coupling.
– ESMF will support several I/O formats (including
  GRIB/BUFR, netCDF, HDF).
– ESMF will have uniform syntax across platforms.
Building blocks

ESMF Superstructure
• Control: assignment of components to processor sets; scheduling of
  components and inter-component exchange; inter-component signals,
  including checkpointing of complete model configurations.
• Couplers and gridded components: validation of exchange packets;
  blocking and non-blocking transfer of boundary data between component
  models; conservation verification; specification of required interfaces
  for components.

ESMF Infrastructure
• Distributed grid operations (transpose, halo, etc.)
• Physical grid specification, metric operations
• Regridding: interpolation of data between grids, ungridded data
• Fields: association of metadata with data arrays; loose and packed
  field bundles
• I/O on distributed data
• Management of distributed memory; data-sharing for shared and
  distributed memory
• Time management, alarms, time and calendar utilities
• Performance profiling and logging; adaptive load-balancing
• Error handling
Target Platforms

ESMF will target a broad range of platforms
   – Major center hardware, e.g.
      • SP, SGI O3K, Alpha
      • 1000+ processors
   – Commodity hardware, e.g.
      • Linux clusters, desktops
Joint Milestone Codeset I

ID   Part I JMC: EVA Suite
a    spectral simulation at T42
b    spectral simulation at T170
c    gridpoint simulation at 1/4° x 1/4° or equivalent
d    component based on a physical domain other than the ocean or
     atmosphere, 2° x 2.5° or equivalent
e    simplified 3D-VAR system with 200K observations/day
fc   synthetic coupled SPMD system
gc   synthetic coupled MPMD system
Joint Milestone Codeset II

Source      ID   Part II JMC: Modeling Applications
GFDL        h    FMS B-grid atmosphere at N45L18
            i    FMS spectral atmosphere at T63L18
            j    FMS MOM4 ocean model at 2°x2°L40
            k    FMS HIM isopycnal C-language ocean model at 1/6°x1/6°L22
MIT         lc   MITgcm coupled atmosphere/ocean at 2.8°x2.8°,
                 atmosphere L5, ocean L15
            m    MITgcm regional and global ocean at 15 km L30
NSIPP       nc   NSIPP atmospheric GCM at 2°x2.5°L34 coupled with NSIPP
                 ocean GCM at 2/3°x1.25°L20
NCAR/LANL   oc   CCSM2 including CAM with Eulerian spectral dynamics and
                 CLM at T42L26 coupled with POP ocean and data ice model
                 at 1°x1°L40
Joint Milestone Codeset III

Source   ID   Part III JMC: Data Assimilation Applications
DAO      p    PSAS-based analysis system with 200K observations/day
         qc   CAM with finite-volume dynamics at 2°x2.5°L55, including CLM
NCEP     r    Global atmospheric spectral model at T170L42
         s    SSI analysis system with 250K observations/day, 2 tracers
         t    WRF regional atmospheric model at 22 km resolution, CONUS
              forecast, 345x569L50
NSIPP    uc   ODAS with OI analysis system at 1.25°x1.25°L20 resolution
              with ~10K observations/day
MIT      v    MITgcm 2.8° century/millennium adjoint sensitivity
Interoperability Demo

#   MODEL                 MODEL           SCIENCE IMPACT
1   GFDL FMS B-grid atm   MITgcm ocean    Global biogeochemistry (CO2, O2),
                                          SI timescales.
2   GFDL FMS MOM4         NCEP forecast   NCEP seasonal forecasting system.
3   NSIPP ocean           LANL CICE       Sea ice model for SI; allows extension
                                          of the SI system to centennial
                                          time scales.
4   NSIPP atm             DAO analysis    Assimilated initial state for SI.
5   DAO analysis          NCEP model      Intercomparison of systems for the
                                          NASA/NOAA joint center for satellite
                                          data assimilation.
6   DAO CAM-fv            NCEP analysis   Intercomparison of systems for the
                                          NASA/NOAA joint center for satellite
                                          data assimilation.
7   NCAR CAM Eul          MITgcm ocean    Improved climate predictive capability:
                                          climate sensitivity to large component
                                          interchange, optimized initial
                                          conditions.
8   NCEP WRF              GFDL MOM4       Development of hurricane prediction
                                          capability.

3 interoperability experiments completed in 2004, 5 by 2005.
Beyond 2004: ESMF Evolution

• Maintenance, support and management
   – NCAR commitment to maintain and support core ESMF
     software
   – Seeking inter-agency commitment to develop ESMF
• Technical evolution
   – Functional extension:
       • Support for advanced data assimilation algorithms: error
         covariance operators, infrastructure for generic
         variational algorithms, etc.
       • Additional grids, new domains
   – Earth System Modeling Environment, including web/GUI
     interface, databases of components and experiments,
     standard diagnostic fields, standard component interfaces.
