
GLAMEPS:
Grand Limited Area Model Ensemble Prediction System

Plans and ideas for a European-wide contribution to TIGGE-LAM

Trond Iversen
met.no & Univ. of Oslo

   GLAMEPS is based on discussions involving (mainly)
      Dale Barker, Jan Barkmeijer, Jose Antonio Garcia-Moya,
 Nils Gustafsson, Bent Hansen Sass, Andras Horanyi, Trond Iversen,
Martin Leutbecher, Jeanette Onvlee, Bartolome Orfila, Xiaohua Yang.
                                                Norwegian Meteorological Institute met.no
                    GLAMEPS
The first planning document has been distributed


It describes plans under the HIRLAM-ALADIN co-operation.

We hope for an extended and European-wide interest.
Today we take the European-wide perspective into account.

In this perspective the discussions afterwards should address:
- principles w.r.t. predictability and probabilistic methods;
- contributions to an implementation plan for the first year


            The GLAMEPS objective

 is to provide, in real time, to all HIRLAM and ALADIN partner countries*:
        an operational, quantitative basis for
     forecasting probabilities of weather events
         in Europe up to 60 hours in advance
 to the benefit of highly specified as well as general
                     applications,
       including risks of high-impact weather.


* The list of partners should be extended!
              Why ensemble prediction?
• Weather prediction is not a deterministic problem
    - The time development of atmospheric flows is critically sensitive to
    initial conditions (deterministic chaos)
    - This property also makes the atmosphere critically dependent on the
    accuracy of boundary conditions and on inaccurate methods for making
    prognoses (i.e. model physics)

• Predictions should therefore account for all sources of uncertainty which
  critically influence the time development

• Non-linearity causes the critical sensitivity to vary with the actual state,
  i.e. “the predictability of the day” is a valuable piece of information

• If we pretend it’s deterministic, we therefore lose crucial information for
  protecting human lives and property

            Why ensemble prediction?
     Why not rather use all resources to construct the “best”
          forecast with as high resolution as possible?


• Predictability of free flows decreases with decreasing scales;
  i.e.: higher resolution increases the need for information about spread
  and the timing of spread saturation

• Predictability of forced flows may be longer than that of free flows;
  i.e.: it can be beneficial to separate unpredictable features from
        those strongly influenced by e.g. topography, land-use, etc.




                 Why ensemble prediction?
Forecast products which require information of spread:

• How certain is today’s weather forecast?

• Composite forecasts (i.e. ensemble or cluster averages) verify better,
  on average, than a single ”best” (control) forecast

• What are the risks of high-impact events?
     - Forecasted risk = probability x potential damage

• In a well-calibrated EPS:
  probable sources of forecast errors can be diagnosed

• Forecasts beyond the predictability limit of pure atmospheric forecasts
  (monthly, seasonal, and longer) are impossible with a deterministic strategy.
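The risk relation and the decision it supports can be sketched in a few lines. This is an illustrative Python sketch; the function names and all numbers are assumptions, not part of the GLAMEPS plan:

```python
def forecast_risk(probability, potential_damage):
    """Forecasted risk = probability x potential damage."""
    return probability * potential_damage

def should_protect(probability, cost, loss):
    """Classic cost-loss decision rule: protecting is worthwhile when
    the protection cost C is below the expected loss p * L."""
    return cost < probability * loss

# Illustrative numbers: 30% chance of an event with potential loss 1.0e6
assert abs(forecast_risk(0.3, 1.0e6) - 3.0e5) < 1e-3
assert should_protect(0.3, cost=2.0e5, loss=1.0e6)
assert not should_protect(0.1, cost=2.0e5, loss=1.0e6)
```

Only a calibrated probability forecast can feed such a rule, which is the point of the slide.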
             Ideal approach for GLAMEPS
• An array of LAM-EPS models or model versions:
   – Each partner runs a unique model version
   – Partners who run the same model version,
     use different lower boundary data
     (SST, Sea ice, deep soil temperature and water, etc.)
   – Each partner runs between 7 and 21 ensemble members based on initial and
     lateral boundary perturbations
     (one control + pairs of symmetric initial perturbations)

• Grid resolution
   – Now 20km, later: 10km or finer, 40 levels, identical in all model versions

• Forecast range 60h - starting daily from 12 UTC
• A common integration domain (!), including:
   –   North Atlantic Ocean north of ca. 15 deg N.
   –   Greenland,
   –   European part of the Arctic
   –   European continent to the Urals

   Alternatively: a minimum common overlap
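The member construction above (one control plus pairs of symmetric initial perturbations) can be sketched as follows. This is illustrative Python; the helper name and the toy state vector are assumptions:

```python
def build_members(control, perturbations):
    """One control member plus symmetric pairs: for each perturbation p,
    add control + p and control - p, giving 1 + 2*N members in total
    (N = 3..10 pairs yields the 7-21 members mentioned above)."""
    members = [list(control)]
    for p in perturbations:
        members.append([c + d for c, d in zip(control, p)])
        members.append([c - d for c, d in zip(control, p)])
    return members

# Toy 3-variable state with 3 perturbation pairs -> 7 members
control = [280.0, 101325.0, 5.0]
perts = [[0.5, 20.0, 0.3], [-0.2, 15.0, -0.1], [0.1, -30.0, 0.4]]
members = build_members(control, perts)
assert len(members) == 7
# Symmetric pairs cancel, so the ensemble mean stays at the control state
mean = [sum(col) / len(members) for col in zip(*members)]
assert all(abs(m - c) < 1e-9 for m, c in zip(mean, control))
```

The symmetry is the design choice worth noting: the initial ensemble mean coincides with the (presumably best) control analysis.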
  Proposed common
  domain
Basic (lat,lon) = (68N,40W)

Size:
- 0.2 deg: 301 x 381
  Lower left corner: i = -40, j = -260
  Upper right corner: i = 260, j = 120

- 0.1 deg: 601 x 761
  Lower left corner: i = -80, j = -520
  Upper right corner: i = 520, j = 240
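The stated grid sizes follow from the inclusive corner indices, and grid indices can be mapped to coordinates under the simplifying assumption of a regular lat-lon grid anchored at the basic point. The operational grid may well be rotated, so the coordinate mapping below is illustrative only:

```python
def grid_size(lower_left, upper_right):
    """Number of grid points along i and j from inclusive corner indices."""
    (i0, j0), (i1, j1) = lower_left, upper_right
    return (i1 - i0 + 1, j1 - j0 + 1)

def grid_lonlat(i, j, basic_lonlat=(-40.0, 68.0), res_deg=0.2):
    """(lon, lat) of grid point (i, j), assuming a regular lat-lon grid
    anchored at the basic point (68N, 40W); illustrative only, since the
    real GLAMEPS grid may use a rotated projection."""
    lon0, lat0 = basic_lonlat
    return (lon0 + res_deg * i, lat0 + res_deg * j)

# The proposed common domain sizes check out against the corners:
assert grid_size((-40, -260), (260, 120)) == (301, 381)  # 0.2 deg
assert grid_size((-80, -520), (520, 240)) == (601, 761)  # 0.1 deg
assert grid_lonlat(0, 0) == (-40.0, 68.0)                # basic point
```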




  Alternatively:
  Proposed minimum
  common overlap of
  domains

Basic (lat,lon) = (68N,40W)
Size:
- 0.2 deg: 211 x 233
  Lower left corner: i = 30, j = -136
  Upper right corner: i = 240, j = 96

- 0.1 deg: 421 x 465
  Lower left corner: i = 60, j = -272
  Upper right corner: i = 480, j = 192




Schematic: Sources of prediction spread in a LAM
• Initial analysis and lateral boundaries
• Lower and upper boundaries

[Figure: plume of ensemble forecasts of temperature, precipitation, etc.
from t=0 to t=60h around the true development. Labels: non-linear filtering
of unpredictable components; ensemble mean better than control; model x;
truth outside spread: model error?]
Schematic: Sources of prediction spread in a LAM
• Initial analysis and lateral boundaries
• Lower and upper boundaries

[Figure: as on the previous slide, with a second model; model y is slightly
better than model x; both plumes shown against the true development
from t=0 to t=60h.]
Schematic: Sources of prediction spread in a LAM
• Initial analysis and lateral boundaries
• Lower and upper boundaries
• Numerical approximations and parameterized physics

[Figure: multimodel ensemble plume of temperature, precipitation, etc.
covering the true development from t=0 to t=60h.]

Warning: it’s not always this simple….
               Aspects of the problem
1. Operational aspects
2. Constructing initial and lateral boundary perturbations
    –   Partly contributed when each LAM version uses data assimilation
        (can be further improved by perturbing observations)
    –   Imported global eps-members
    –   Calculate model-specific perturbations (SVs, ETKF, SLAF,…)
3. Lower boundary data perturbations
    –   Stochastic perturbations
    –   Switch surface schemes
4. Model perturbations
    –   Switching models (e.g. Aladin and Hirlam)
    –   Switching physical packages (e.g. Straco and RKKF cloud schemes)
    –   Stochastic perturbations
    –   Forcing Singular Vectors
5. EPS-calibration and probabilistic validation
6. Post-processing, graphical presentation, products
7. Further downscaling to meso- and convective scales
                         GLAMEPS
        Partners’ declared interests so far
1. Operational aspects.
Hirlam version of GLAMEPS Laboratory: met.no, DMI, (HIRLAM partners)
Aladin version of GLAMEPS Laboratory: ???
2. Constructing initial and lateral boundary perturbations
Each partner runs control forecasts from its own data-assimilation
    analysis
Downscaling of global EPS: HMS, ZAMG
Scaled lagged average forecasting: INM
Based on global singular vectors (incl. targeted): KNMI, met.no
Based on LAM singular vectors: HMS, KNMI, (met.no)
Based on ETKF: INM, ZAMG, SMHI
Lateral pert. from global EPS: met.no
Lateral pert. from stationary forecast error statistics: SMHI

                           GLAMEPS
            Partners’ declared interests so far
3.   Lower boundary data perturbations
Lower boundary condition perturbations: KNMI, met.no

4.   Model perturbations
Based on Forcing singular vectors or sensitivities: KNMI, met.no
Multi-physics (and similar): ZAMG, (all participants run one version)
Stochastic physics (and similar): INM, DMI

5.   EPS-calibration and probabilistic validation
GLAMEPS basic verification and validation software: INM, HMS, LTM

6.   Post-processing, graphical presentation, products
GLAMEPS basic postprocessing software: INM, HMS, LTM
BMA Bayesian model averaging: INM, KNMI

7. Further downscaling to meso- and convective scales
COSMO?
              Quality objective
To operationally produce ensemble forecasts with
• a spread reflecting known uncertainties in data and
   model;
• a satisfactory spread-skill relationship (calibration); and
• a better probabilistic skill than the operational ECMWF
   EPS;
 for
• the chosen forecast range of 60 hours;
• our common target domain; and
• weather events of our particular interest
   (probabilistic skill parameters).
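The spread-skill relationship in the second bullet can be quantified, in one common convention, as mean ensemble spread versus RMSE of the ensemble mean. This is an illustrative sketch (population standard deviation is used; GLAMEPS may adopt other measures):

```python
import math

def spread_and_skill(ensembles, observations):
    """Mean ensemble spread (stddev of members about the ensemble mean)
    and skill (RMSE of the ensemble mean against observations).
    In a well-calibrated EPS the two are of comparable magnitude."""
    spreads, sq_errors = [], []
    for members, obs in zip(ensembles, observations):
        n = len(members)
        mean = sum(members) / n
        var = sum((m - mean) ** 2 for m in members) / n
        spreads.append(math.sqrt(var))
        sq_errors.append((mean - obs) ** 2)
    spread = sum(spreads) / len(spreads)
    rmse = math.sqrt(sum(sq_errors) / len(sq_errors))
    return spread, rmse

# Toy check: two 2-member cases whose ensemble means hit the observations
spread, rmse = spread_and_skill([[1.0, 3.0], [2.0, 4.0]], [2.0, 3.0])
assert spread == 1.0 and rmse == 0.0
```

A spread/RMSE ratio well below 1 over many cases is the "too little spread" situation discussed later.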


                 Validation
      of forecasts and forecast methods



      A. Murphy (1993): What is a good forecast?

1.   Consistency: correspondence between forecaster‘s
     best judgement and their forecasts
2.   Quality: correspondence between forecasts and
     matching observations
3.   Value: benefits realised by decision makers through
     the use of the forecasts


          Lower BC - relevance
Influence of North Atlantic SST in Europe?
(I.-L. Frogner, M.H. Jensen, met.no)
Targeted Forcing Singular Vectors,
ECMWF IFS, Winter, High NAO
(the 20% most sensitive days)




  Meridional cross-section along 0 deg,
  from the North Pole to 40 deg N.

Calibration: Spread-skill ratio
Example from medium-range (T. Palmer)

[Figure: spread-skill comparison. Labels: too little spread
(model error missing?); too much spread (wrong correction of
too small spread at day 2-3?).]
      Combining probabilistic forecasts from
      several models may give better scores
      than even the best of the individual models.

Example (Norwegian system):
ROC area as a function of
precipitation threshold.

Black curve (NORLAMEPS)
is a combination of
the blue (LAMEPS) and the
green (TEPS)

[The red is the standard ECMWF
EPS for reference.]
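The ROC area used in this comparison can be computed from (false-alarm rate, hit rate) points with the trapezoid rule. A minimal illustrative sketch, not the scoring code used for the figure:

```python
def roc_area(hit_rates, false_alarm_rates):
    """Area under the ROC curve via the trapezoid rule. Inputs are the
    hit rate and false-alarm rate at each probability threshold;
    the (0,0) and (1,1) endpoints are added automatically."""
    pts = sorted(zip([0.0] + list(false_alarm_rates) + [1.0],
                     [0.0] + list(hit_rates) + [1.0]))
    area = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        area += 0.5 * (y0 + y1) * (x1 - x0)
    return area

# A no-skill (diagonal) system scores 0.5; a perfect system scores 1.0
assert roc_area([0.5], [0.5]) == 0.5
assert roc_area([1.0], [0.0]) == 1.0
```

An area above 0.5 indicates skill relative to a random forecast, which is what the black NORLAMEPS curve demonstrates against its components.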
     Hirlam + Aladin +……+

is, by construction, particularly well
  suited for GLAMEPS:

jointly exploiting the distributed
 resources for short-range NWP in
 Europe, for the benefit of our users.

Initial step: a GLAMEPS laboratory at ECMWF,
building on existing operational experience
• To select a small set of model versions which are well
  established and significantly different, but still approx.
  equally valid representations of the atmosphere
   – different models (ALADIN, HIRLAM,…?)
   – different physical packages (STRACO and RK-KF deep conv)
• To construct initial/lateral boundary perturbations
  representative for triggering the ”instability of the day” given
  the uncertainty constraints
   – ECMWF TEPS / EPS (build on met.no LAMEPS)


• Ensemble calibration (spread-skill-ratio)
                    Operational
To run a first phase suite at ECMWF (Special Project)
   –   Some scripts are ready
   –   Some verification tools are ready
   –   Some tools for probabilistic products are ready
   –   Working week in second half of January 2007


A possibility is to establish a ”PAF” (Prediction Application
    Facility) with ECMWF

Over 1-2 years, the first suite should gradually be
    distributed to partners.
Selected results copied in real time to ECMWF,
including a set of graphical presentations

          Further R&D in parallel
1. Through research, gradually increase the ensemble size and the
   number of error sources represented
   Include lower boundary perturbations and other types of model
       perturbations
   – vary model coefficients (e.g. learn from climate modelers;
       challenge for ”physical processes – community” in NWP)
   – Targeted Forcing SVs or Forcing Sensitivities (KNMI, met.no),
   – weak 4D-Var perturbed tendencies?
   – ….
   Include alternative initial/lateral boundary perturbations
   – ETKF generalized breeding (SMHI),
   – HIRLAM and ALADIN LAM SVs (KNMI, SMHI, HMS),
   – …
2. To run a de-centralized system with real-time
   dissemination of data to ECMWF as a common centre

3. Ensemble calibration in all phases
       Thank you for your attention
Mother of Pearl clouds over Oslo, January 2002




            Discussion Session


Proposed Rapporteurs:
Jan Barkmeijer
Andras Horanyi

The discussion should address the issues distributed to
  all participants before the meeting,

and repeated in the following slides.

                    Overall Issues
We propose that the discussion considers 5 major aspects:

1. Implementation of a GLAMEPS Version 0 at ECMWF
   (first phase, the GLAMEPS Laboratory)
2. Updates of Version 0 and implementation of a
   distributed GLAMEPS
3. Validation of GLAMEPS: Ensemble calibration and
   product verification
4. Participation in GLAMEPS, establishment of steering
   group and working teams
5. Funding and related issues, e.g. to be addressed by the
   SG and the participants separately
Questions related to 1 (GLAMEPS Version 0)
  and 2 (later versions).
A.   What should GLAMEPS look like if we wish to incorporate contributions from all consortia?
     –  Version 0 GLAMEPS at ECMWF: build on existing systems?
     –  Version 1 and later in a distributed setting?

B.   What outcome do we seek from Version 0, and what kind of experiments do we need to achieve
     that outcome? Suggestions:
     –   Testing practical operational set-up?
     –   EPS calibration
     –   Testing alternative methods for perturbing initial and boundary data and selecting model
         versions in the process towards Version 1 (and further).
     –   Testing domain size, ensemble size, resolution, etc.
     –   Setting up forecast presentations and probabilistic validation measures.
     –   Testing clustering algorithms for selecting members for km-scale resolution.

C.   Suggested solutions for distribution of GLAMEPS forecast members to all participants in real time?
     And common data formats (GRIB-2, NetCDF, …)? (relevant for all versions)

D.   Domain-size, forecast length, forecast frequency, spatial resolution, ensemble size:
     how strict do we need to keep common requirements (or as common as practically possible)?




Questions related to 1 (GLAMEPS Version 0)
  and 2 (later versions).
E.   Given the required forecast length of 60 h and a grid resolution of ~10 km
     (20 km in Version 0), how large a domain do we need?

F.   Can we rely on the quality of estimated lateral boundary uncertainty as a
     replacement for initial state uncertainties if a minimum-sized domain is chosen?

G.   What is the minimum number of ensemble members needed to sample the pdf of prediction
     errors, given that we take into account uncertainties in
     –   initial and lateral boundary conditions
     –   (upper and) lower boundary data
     –   numerical model formulation?

H.   What outcome do we seek from Version 1 and later?
     –  target domain and resolution, output products, postprocessing, availability, etc.

I.   Do we want to further downscale all or selected ensemble members on a smaller domain
     with meso-scale grid resolution (e.g. ~2 km)? And how do we achieve a
     solution that all partners can benefit from in real time without wasting resources?
     –    E.g. COSMO,
     –    some WMO/WWRP projects (Beijing Olympics research demonstration project (EPS),
          MAP D-Phase and COPS)
     –    This increases the requirements for disseminating larger data-sets than those
          used for weather forecasting alone.
Questions related to 3: quality and value.
J.       What measures of spread and skill should be used for ensemble calibration?
     –      Inner products?
     –      RMS?
     –      Talagrand diagrams?
     –      Other measures?
     –      How to avoid “correct” spread for the wrong reason?
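The Talagrand diagram (rank histogram) listed above can be sketched in a few lines. Illustrative Python; note that ties between the observation and a member need a convention, here strict less-than:

```python
def rank_histogram(ensembles, observations):
    """Talagrand diagram: count, over many cases, the rank of each
    observation among the N ensemble members (giving N+1 bins).
    A flat histogram indicates a well-calibrated ensemble; a U-shape
    suggests too little spread."""
    nbins = len(ensembles[0]) + 1
    counts = [0] * nbins
    for members, obs in zip(ensembles, observations):
        rank = sum(1 for m in members if m < obs)
        counts[rank] += 1
    return counts

# Observation falls between the 2nd and 3rd of three members -> bin 2
assert rank_histogram([[1.0, 2.0, 3.0]], [2.5]) == [0, 0, 1, 0]
```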

K.       When selecting measures for probabilistic skill and value, what should be
         emphasized?
     –      BSS, and BSS-decomposed (resolution and reliability)
     –      ROC-curves and area
     –      Value vs. Cost-Loss ratio
     –      Other measures?
     –      Should we correct for bias error?
     –      How can we best determine climatic values in the skill and value scores?
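The Brier score and a BSS against sample climatology can be sketched as follows. Illustrative only: an operational BSS would normally use a long-term climatology as the reference, which relates to the last question above:

```python
def brier_score(probs, outcomes):
    """Mean squared difference between probability forecasts and
    binary (0/1) outcomes; lower is better."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

def brier_skill_score(probs, outcomes):
    """BSS with the sample climatology as reference forecast:
    1 is perfect, 0 is no better than climatology, negative is worse.
    Assumes the outcomes are not all identical (reference score > 0)."""
    clim = sum(outcomes) / len(outcomes)
    bs_ref = brier_score([clim] * len(outcomes), outcomes)
    return 1.0 - brier_score(probs, outcomes) / bs_ref

assert brier_score([1.0, 0.0], [1, 0]) == 0.0
assert brier_skill_score([1.0, 0.0], [1, 0]) == 1.0
```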

L.       How should we select variables relevant for the kind of
         weather events we design GLAMEPS for? (High-impact weather,
         recommendations from TIGGE or other WMO/WWRP projects?)




Questions related to 4: Participation and
  organization
M.       Which consortia and which countries are ready to take part in GLAMEPS?

N.       Which are interested but need further decisions before joining?
     –      Any conditions that need to be fulfilled by GLAMEPS?

O.       What kind of in-kind contributions, computer resources, and operational
         funding (travel) are required for participation?
     –      Relevant expertise in research and development?
     –      Operational experience in running LAMEPS or other EPS?
     –      Supercomputer resources for running a LAM with a minimum number of ensemble
            members?
     –      People to contribute to GLAMEPS validation, calibration and further development?

P.       A successful GLAMEPS will address several of the major objectives of TIGGE
         (the THORPEX Interactive Grand Global Ensemble). Could GLAMEPS be declared a
         contribution to TIGGE-LAM? (A little premature, as TIGGE-LAM has not
         started yet.)

Q.       A GLAMEPS Steering Group (SG) should be established and meet regularly. The SG
         could be divided into two Working Teams, such as:
     –      WT on system, technical and operational issues
     –      WT on the scientific basis and development of methods

Questions related to 5: Extra funding.
R. In addition to the voluntary contributions
   mentioned in question O above, how can we
   (i.e. the SG) raise extra funding or other
   types of resources for GLAMEPS?
  –   SRNWP/EUMETNET application (ref. the new SRNWP
      structure): travel money?
  –   PAF with ECMWF (Prediction Application Facility):
      use of ECMWF resources, expertise, and software.
      Through the PAF, ECMWF can serve as a data centre
      for GLAMEPS and thus reduce data-transfer costs.
  –   Can dedicating GLAMEPS to TIGGE-LAM release
      funding?
  –   Other?